[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423991#comment-15423991 ] Genmao Yu commented on HADOOP-13483: It means if the path is a directory, we should throw an exception directly. Well, the comment is misleading. > file-create should throw error rather than overwrite directories > > > Key: HADOOP-13483 > URL: https://issues.apache.org/jira/browse/HADOOP-13483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13483-HADOOP-12756.002.patch, > HADOOP-13483-HADOOP-12756.003.patch, HADOOP-13483-HADOOP-12756.004.patch, > HADOOP-13483.001.patch > > > similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
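The create-time behavior under discussion (fail on a directory instead of silently overwriting it) can be illustrated with a small, self-contained sketch. `CreateGuard`, its `Kind` enum, and the in-memory `store` map are hypothetical stand-ins for the real FileSystem metadata lookup, not the patch's actual code:

```java
import java.nio.file.FileAlreadyExistsException;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the create-time guard: refuse to overwrite a
// directory, and refuse to overwrite a file unless the caller asked
// for overwrite. "Kind" and "store" are illustrative stand-ins for
// the object store's metadata lookup.
public class CreateGuard {
    enum Kind { FILE, DIR }
    final Map<String, Kind> store = new HashMap<>();

    void create(String path, boolean overwrite) throws FileAlreadyExistsException {
        Kind existing = store.get(path);
        if (existing == Kind.DIR) {
            // path references a directory: always an error, never an overwrite
            throw new FileAlreadyExistsException(path + " is a directory");
        }
        if (existing == Kind.FILE && !overwrite) {
            throw new FileAlreadyExistsException(path + " already exists");
        }
        store.put(path, Kind.FILE);  // create (or overwrite) the file entry
    }
}
```

The key point matching the comment above: the directory check throws unconditionally, regardless of the overwrite flag.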
[jira] [Commented] (HADOOP-13498) the number of multi-part upload parts should not be bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423976#comment-15423976 ] Genmao Yu commented on HADOOP-13498: In this case, I want to check whether the current implementation can cover the 10000-part case. Indeed, 100GB is too large. I will set 'fs.oss.multipart.upload.size' to 100 * 1024 (the lowest limit of OSS multipart upload), so the total file size only gets close to 1GB. Is that acceptable? Or is there any other suggestion? > the number of multi-part upload parts should not be bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch > > > We should not only throw an exception when the 10000-part limit of multipart upload is exceeded, but should guarantee that any object can be uploaded no matter how big it is.
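The sizing trade-off discussed here (minimum legal part size vs. the 10000-part cap) can be sketched as a small helper. The constants and the method name `choosePartSize` are illustrative assumptions, not the actual patch:

```java
// Sketch of part-size selection under the OSS constraints mentioned in
// the discussion: at most 10000 parts per upload and a minimum part
// size of 100 KB. When the configured part size would need more than
// 10000 parts, grow the part size instead of failing, so arbitrarily
// large objects can still be uploaded.
public class OssPartSize {
    static final long MAX_PARTS = 10000L;
    static final long MIN_PART_SIZE = 100L * 1024;   // 100 KB

    static long choosePartSize(long configuredPartSize, long fileSize) {
        long partSize = Math.max(configuredPartSize, MIN_PART_SIZE);
        // Smallest part size that keeps the part count within MAX_PARTS
        long minNeeded = (fileSize + MAX_PARTS - 1) / MAX_PARTS;
        return Math.max(partSize, minNeeded);
    }
}
```

Note that 10000 parts at the 100 KB minimum is just under 1 GB, which is presumably why the comment above proposes exercising the part-count limit with a file close to 1 GB rather than 100 GB.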
[jira] [Commented] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
[ https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423972#comment-15423972 ] Hadoop QA commented on HADOOP-13487: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 42s{color} | 
{color:red} root generated 1 new + 709 unchanged - 1 fixed = 710 total (was 710) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 46s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824083/HADOOP-13487.01.patch | | JIRA Issue | HADOOP-13487 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 75a210673a5f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2353271 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10275/artifact/patchprocess/diff-compile-javac-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10275/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10275/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper > - > > Key: HADOOP-13487 > URL: https://issues.apache.org/jira/browse/HADOOP-13487 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Alex Ivanov >Assignee:
[jira] [Updated] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
[ https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13487: --- Attachment: HADOOP-13487.01.patch After looking into this further, I think we can simply go with option #1 above. On thread startup, {{PathChildrenCache}} needs to load the znodes anyway, which is the most time-consuming operation. Patch 1 expresses the idea; I will test it in a test cluster and update here. Benchmarked with 100k existing expired znodes: while KMS startup takes minutes, the new logic running in memory takes about 2 seconds, which I think is fine. I intentionally ignored exceptions for compatibility - if the directory contains some znodes that can't be understood by ZKDTSM, KMS should still be able to start and run as normal. > Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper > - > > Key: HADOOP-13487 > URL: https://issues.apache.org/jira/browse/HADOOP-13487 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Alex Ivanov >Assignee: Xiao Chen > Attachments: HADOOP-13487.01.patch > > > Configuration: > CDH 5.5.1 (Hadoop 2.6+) > KMS configured to store delegation tokens in Zookeeper > DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties > Findings: > It seems to me delegation tokens never get cleaned up from Zookeeper past > their renewal date. 
I can see in the logs that the removal thread is started > with the expected interval: > {code} > 2016-08-11 08:15:24,511 INFO AbstractDelegationTokenSecretManager - Starting > expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) > {code} > However, I don't see any delegation token removals, indicated by the > following log message: > org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager > --> removeStoredToken(TokenIdent ident), line 769 [CDH] > {code} > if (LOG.isDebugEnabled()) { > LOG.debug("Removing ZKDTSMDelegationToken_" > + ident.getSequenceNumber()); > } > {code} > Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't > get cleaned up.
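The one-shot startup sweep described in the comment above can be sketched in isolation: walk every stored token and collect the expired ones for deletion. The `Map` of token id to expiry timestamp is a hypothetical stand-in for the real ZKDTSM token identifiers and znode layout; real code would also delete the matched znodes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the startup cleanup idea: scan all stored tokens once and
// collect those whose expiry is in the past. Types here are
// illustrative stand-ins, not the actual ZKDTSM classes.
public class ExpiredTokenSweep {
    static List<String> findExpired(Map<String, Long> tokenExpiry, long now) {
        List<String> expired = new ArrayList<>();
        for (Map.Entry<String, Long> e : tokenExpiry.entrySet()) {
            if (e.getValue() < now) {
                expired.add(e.getKey());  // candidate znode to remove
            }
        }
        return expired;
    }
}
```

As the comment notes, the scan itself is cheap once the znodes are in memory; loading them is the dominant cost, and that load happens at startup anyway.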
[jira] [Updated] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
[ https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13487: --- Status: Patch Available (was: Open) > Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper > - > > Key: HADOOP-13487 > URL: https://issues.apache.org/jira/browse/HADOOP-13487 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Alex Ivanov >Assignee: Xiao Chen > Attachments: HADOOP-13487.01.patch > > > Configuration: > CDH 5.5.1 (Hadoop 2.6+) > KMS configured to store delegation tokens in Zookeeper > DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties > Findings: > It seems to me delegation tokens never get cleaned up from Zookeeper past > their renewal date. I can see in the logs that the removal thread is started > with the expected interval: > {code} > 2016-08-11 08:15:24,511 INFO AbstractDelegationTokenSecretManager - Starting > expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) > {code} > However, I don't see any delegation token removals, indicated by the > following log message: > org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager > --> removeStoredToken(TokenIdent ident), line 769 [CDH] > {code} > if (LOG.isDebugEnabled()) { > LOG.debug("Removing ZKDTSMDelegationToken_" > + ident.getSequenceNumber()); > } > {code} > Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't > get cleaned up.
[jira] [Assigned] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
[ https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen reassigned HADOOP-13487: -- Assignee: Xiao Chen > Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper > - > > Key: HADOOP-13487 > URL: https://issues.apache.org/jira/browse/HADOOP-13487 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Alex Ivanov >Assignee: Xiao Chen > > Configuration: > CDH 5.5.1 (Hadoop 2.6+) > KMS configured to store delegation tokens in Zookeeper > DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties > Findings: > It seems to me delegation tokens never get cleaned up from Zookeeper past > their renewal date. I can see in the logs that the removal thread is started > with the expected interval: > {code} > 2016-08-11 08:15:24,511 INFO AbstractDelegationTokenSecretManager - Starting > expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) > {code} > However, I don't see any delegation token removals, indicated by the > following log message: > org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager > --> removeStoredToken(TokenIdent ident), line 769 [CDH] > {code} > if (LOG.isDebugEnabled()) { > LOG.debug("Removing ZKDTSMDelegationToken_" > + ident.getSequenceNumber()); > } > {code} > Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't > get cleaned up.
[jira] [Commented] (HADOOP-13498) the number of multi-part upload parts should not be bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423888#comment-15423888 ] shimingfei commented on HADOOP-13498: - it is not practical for the test to create a file of almost 100GB {code} +ContractTestUtils.createAndVerifyFile(fs, getTestPath(), +10001 * 10 * 1024 * 1024L); {code} > the number of multi-part upload parts should not be bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch > > > We should not only throw an exception when the 10000-part limit of multipart upload is exceeded, but should guarantee that any object can be uploaded no matter how big it is.
[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423868#comment-15423868 ] shimingfei commented on HADOOP-13483: - several minor comments: {code} + // get the status or throw an FNFE an to a {code} {code} +// path references a directory: automatic error what does automatic error mean? {code} > file-create should throw error rather than overwrite directories > > > Key: HADOOP-13483 > URL: https://issues.apache.org/jira/browse/HADOOP-13483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13483-HADOOP-12756.002.patch, > HADOOP-13483-HADOOP-12756.003.patch, HADOOP-13483-HADOOP-12756.004.patch, > HADOOP-13483.001.patch > > > similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]
[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423861#comment-15423861 ] Hadoop QA commented on HADOOP-13483: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 36s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824071/HADOOP-13483-HADOOP-12756.004.patch | | JIRA Issue | HADOOP-13483 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c3950177a00b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-12756 / 8346f922 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10274/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10274/testReport/ | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10274/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > file-create should throw error rather than overwrite directories > > > Key: HADOOP-13483 > URL: https://issues.apache.org/jira/browse/HADOOP-13483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >
[jira] [Commented] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010
[ https://issues.apache.org/jira/browse/HADOOP-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423854#comment-15423854 ] Hadoop QA commented on HADOOP-13504: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 12s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824064/HADOOP-13504-v1.patch | | JIRA Issue | HADOOP-13504 | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 797b319065bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2353271 | | Default Java | 1.8.0_101 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10272/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10272/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Refactor jni_common to conform to C89 restrictions imposed by Visual Studio > 2010 > > > Key: HADOOP-13504 > URL: https://issues.apache.org/jira/browse/HADOOP-13504 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-13504-v1.patch > > > Some code in jni_common declares variables after the first statement in a > function. This is not allowed by compilers such as Visual Studio > 2010 that only support the C89 standard.
[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423853#comment-15423853 ] shimingfei commented on HADOOP-13491: - [~drankye] Could you please also take a look? Thanks! > fix several warnings from findbugs > -- > > Key: HADOOP-13491 > URL: https://issues.apache.org/jira/browse/HADOOP-13491 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13491-HADOOP-12756.001.patch, > HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch, > HADOOP-13491-HADOOP-12756.004.patch > > > {code:title=Bad practice Warnings|borderStyle=solid} > Code Warning > SR org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores > result of java.io.InputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) > Called method java.io.InputStream.skip(long) > At AliyunOSSInputStream.java:[line 235] > SR > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > ignores result of java.io.FileInputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > Called method java.io.FileInputStream.skip(long) > At AliyunOSSOutputStream.java:[line 177] > RV Exceptional return value of java.io.File.delete() ignored in > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Called method java.io.File.delete() > At 
AliyunOSSOutputStream.java:[line 116] > {code} > {code:title=Multithreaded correctness Warnings|borderStyle=solid} > Code Warning > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked > 90% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining > Synchronized 90% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Synchronized access at AliyunOSSInputStream.java:[line 106] > Synchronized access at AliyunOSSInputStream.java:[line 168] > Synchronized access at AliyunOSSInputStream.java:[line 189] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 190] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 131] > Synchronized access at AliyunOSSInputStream.java:[line 131] > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of > time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position > Synchronized 66% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 232] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 235] > Unsynchronized access at AliyunOSSInputStream.java:[line 236] > Unsynchronized access at AliyunOSSInputStream.java:[line 245] > Synchronized access at AliyunOSSInputStream.java:[line 222] > Synchronized access at AliyunOSSInputStream.java:[line 105] > Synchronized access at 
AliyunOSSInputStream.java:[line 167] > Synchronized access at AliyunOSSInputStream.java:[line 169] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 114] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 259] > Synchronized access at AliyunOSSInputStream.java:[line 266] > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked > 85% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.
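The SR_NOT_CHECKED warnings quoted above come from ignoring the return value of `InputStream.skip(long)`, which may skip fewer bytes than requested. A common fix, sketched here as an illustration rather than the actual patch, is a loop that retries until the requested count is consumed:

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// skip(n) is only a hint: it may skip fewer than n bytes (or zero).
// Loop until the full count is consumed, falling back to read() to
// guarantee progress; checking the return value this way also
// satisfies findbugs' SR_NOT_CHECKED check.
public class SkipUtil {
    static void skipFully(InputStream in, long n) throws IOException {
        while (n > 0) {
            long skipped = in.skip(n);
            if (skipped > 0) {
                n -= skipped;
            } else if (in.read() >= 0) {
                n--;                      // made one byte of progress
            } else {
                throw new EOFException("EOF with " + n + " bytes left to skip");
            }
        }
    }
}
```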
[jira] [Updated] (HADOOP-13483) file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13483: --- Attachment: HADOOP-13483-HADOOP-12756.004.patch > file-create should throw error rather than overwrite directories > > > Key: HADOOP-13483 > URL: https://issues.apache.org/jira/browse/HADOOP-13483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13483-HADOOP-12756.002.patch, > HADOOP-13483-HADOOP-12756.003.patch, HADOOP-13483-HADOOP-12756.004.patch, > HADOOP-13483.001.patch > > > similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]
[jira] [Commented] (HADOOP-13499) Support session credentials for authenticating with Aliyun
[ https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423849#comment-15423849 ] Hadoop QA commented on HADOOP-13499: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824068/HADOOP-13499-HADOOP-12756.002.patch | | JIRA Issue | HADOOP-13499 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 845fe1fa92a7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-12756 / 8346f922 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10273/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10273/testReport/ | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10273/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support session credentials for authenticating with Aliyun > -- > > Key: HADOOP-13499 > URL: https://issues.apache.org/jira/browse/HADOOP-13499 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Vers
[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423846#comment-15423846 ] Genmao Yu commented on HADOOP-13491: I think current doc is OK > fix several warnings from findbugs > -- > > Key: HADOOP-13491 > URL: https://issues.apache.org/jira/browse/HADOOP-13491 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13491-HADOOP-12756.001.patch, > HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch, > HADOOP-13491-HADOOP-12756.004.patch > > > {code:title=Bad practice Warnings|borderStyle=solid} > Code Warning > RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores > result of java.io.InputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) > Called method java.io.InputStream.skip(long) > At AliyunOSSInputStream.java:[line 235] > RR > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > ignores result of java.io.FileInputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > Called method java.io.FileInputStream.skip(long) > At AliyunOSSOutputStream.java:[line 177] > RVExceptional return value of java.io.File.delete() ignored in > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Called method java.io.File.delete() > At 
AliyunOSSOutputStream.java:[line 116] > {code} > {code:title=Multithreaded correctness Warnings|borderStyle=solid} > Code Warning > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked > 90% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining > Synchronized 90% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Synchronized access at AliyunOSSInputStream.java:[line 106] > Synchronized access at AliyunOSSInputStream.java:[line 168] > Synchronized access at AliyunOSSInputStream.java:[line 189] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 190] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 131] > Synchronized access at AliyunOSSInputStream.java:[line 131] > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of > time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position > Synchronized 66% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 232] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 235] > Unsynchronized access at AliyunOSSInputStream.java:[line 236] > Unsynchronized access at AliyunOSSInputStream.java:[line 245] > Synchronized access at AliyunOSSInputStream.java:[line 222] > Synchronized access at AliyunOSSInputStream.java:[line 105] > Synchronized access at 
AliyunOSSInputStream.java:[line 167] > Synchronized access at AliyunOSSInputStream.java:[line 169] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 114] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 259] > Synchronized access at AliyunOSSInputStream.java:[line 266] > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked > 85% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.Ali
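One of the bad-practice warnings quoted above (RV_RETURN_VALUE_IGNORED_BAD_PRACTICE) flags an ignored File.delete() result in close(). The generic way to clear this class of warning is to check, or at least log, the result; a hedged sketch, not the actual patch:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class DeleteCheck {
    // Hypothetical sketch: inspect the boolean returned by File.delete()
    // instead of discarding it, which is what FindBugs complains about.
    static boolean deleteQuietly(File f) {
        boolean deleted = f.delete();
        if (!deleted && f.exists()) {
            // Logging (rather than throwing) keeps close() best-effort.
            System.err.println("Failed to delete " + f);
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        File tmp = Files.createTempFile("delete-check", ".tmp").toFile();
        System.out.println(deleteQuietly(tmp)); // true: file existed and was removed
    }
}
```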
[jira] [Updated] (HADOOP-13499) Support session credentials for authenticating with Aliyun
[ https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13499: --- Attachment: HADOOP-13499-HADOOP-12756.002.patch > Support session credentials for authenticating with Aliyun > -- > > Key: HADOOP-13499 > URL: https://issues.apache.org/jira/browse/HADOOP-13499 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu >Priority: Minor > Fix For: HADOOP-12756 > > Attachments: HADOOP-13499-HADOOP-12756.001.patch, > HADOOP-13499-HADOOP-12756.002.patch > >
[jira] [Updated] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010
[ https://issues.apache.org/jira/browse/HADOOP-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-13504: --- Parent Issue: HADOOP-11842 (was: HADOOP-11264) > Refactor jni_common to conform to C89 restrictions imposed by Visual Studio > 2010 > > > Key: HADOOP-13504 > URL: https://issues.apache.org/jira/browse/HADOOP-13504 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-13504-v1.patch > > > A piece of code in jni_common declares variables after the first statement in a > function. This behavior is not allowed by compilers that only support the > C89 standard, such as Visual Studio 2010.
[jira] [Updated] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010
[ https://issues.apache.org/jira/browse/HADOOP-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-13504: --- Status: Patch Available (was: Open) > Refactor jni_common to conform to C89 restrictions imposed by Visual Studio > 2010 > > > Key: HADOOP-13504 > URL: https://issues.apache.org/jira/browse/HADOOP-13504 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-13504-v1.patch > > > A piece of code in jni_common declares variables after the first statement in a > function. This behavior is not allowed by compilers that only support the > C89 standard, such as Visual Studio 2010.
[jira] [Commented] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010
[ https://issues.apache.org/jira/browse/HADOOP-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423813#comment-15423813 ] SammiChen commented on HADOOP-13504: Patch attached. > Refactor jni_common to conform to C89 restrictions imposed by Visual Studio > 2010 > > > Key: HADOOP-13504 > URL: https://issues.apache.org/jira/browse/HADOOP-13504 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-13504-v1.patch > > > A piece of code in jni_common declares variables after the first statement in a > function. This behavior is not allowed by compilers that only support the > C89 standard, such as Visual Studio 2010.
[jira] [Updated] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010
[ https://issues.apache.org/jira/browse/HADOOP-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-13504: --- Attachment: HADOOP-13504-v1.patch > Refactor jni_common to conform to C89 restrictions imposed by Visual Studio > 2010 > > > Key: HADOOP-13504 > URL: https://issues.apache.org/jira/browse/HADOOP-13504 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-13504-v1.patch > > > A piece of code in jni_common declares variables after the first statement in a > function. This behavior is not allowed by compilers that only support the > C89 standard, such as Visual Studio 2010.
[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423811#comment-15423811 ] shimingfei commented on HADOOP-13491: - how about changing the doc to {code} /** * Skips over n bytes of data from the input stream, failing * if fewer than n bytes are skipped. * @param is the input stream. * @param n the number of bytes to be skipped. * @throws IOException if fewer than n bytes can be skipped. */ {code} > fix several warnings from findbugs > -- > > Key: HADOOP-13491 > URL: https://issues.apache.org/jira/browse/HADOOP-13491 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13491-HADOOP-12756.001.patch, > HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch, > HADOOP-13491-HADOOP-12756.004.patch > > > {code:title=Bad practice Warnings|borderStyle=solid} > Code Warning > RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores > result of java.io.InputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) > Called method java.io.InputStream.skip(long) > At AliyunOSSInputStream.java:[line 235] > RR > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > ignores result of java.io.FileInputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > Called method java.io.FileInputStream.skip(long) > At AliyunOSSOutputStream.java:[line 177] > RVExceptional return value of java.io.File.delete() ignored in > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Bug type 
RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Called method java.io.File.delete() > At AliyunOSSOutputStream.java:[line 116] > {code} > {code:title=Multithreaded correctness Warnings|borderStyle=solid} > Code Warning > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked > 90% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining > Synchronized 90% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Synchronized access at AliyunOSSInputStream.java:[line 106] > Synchronized access at AliyunOSSInputStream.java:[line 168] > Synchronized access at AliyunOSSInputStream.java:[line 189] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 190] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 131] > Synchronized access at AliyunOSSInputStream.java:[line 131] > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of > time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position > Synchronized 66% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 232] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 235] > Unsynchronized access at AliyunOSSInputStream.java:[line 
236] > Unsynchronized access at AliyunOSSInputStream.java:[line 245] > Synchronized access at AliyunOSSInputStream.java:[line 222] > Synchronized access at AliyunOSSInputStream.java:[line 105] > Synchronized access at AliyunOSSInputStream.java:[line 167] > Synchronized access at AliyunOSSInputStream.java:[line 169] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 114] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 259] > Synchronized access at Aliy
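The javadoc proposed above implies a skip-until-complete helper, looping because InputStream.skip() may legally skip fewer bytes than requested. A minimal sketch of such a contract (the method name and EOF handling are assumptions, not taken from the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class Skip {
    // Sketch of the documented contract: skip exactly n bytes or throw.
    static void skipFully(InputStream is, long n) throws IOException {
        while (n > 0) {
            long skipped = is.skip(n);
            if (skipped <= 0) {
                // skip() made no progress; distinguish EOF via read()
                if (is.read() == -1) {
                    throw new IOException("EOF before " + n + " more bytes skipped");
                }
                skipped = 1; // the read() above consumed one byte
            }
            n -= skipped;
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream is = new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5});
        skipFully(is, 4);
        System.out.println(is.read()); // 5
    }
}
```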
[jira] [Created] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010
SammiChen created HADOOP-13504: -- Summary: Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010 Key: HADOOP-13504 URL: https://issues.apache.org/jira/browse/HADOOP-13504 Project: Hadoop Common Issue Type: Sub-task Reporter: SammiChen Assignee: SammiChen A piece of code in jni_common declares variables after the first statement in a function. This behavior is not allowed by compilers that only support the C89 standard, such as Visual Studio 2010.
[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423804#comment-15423804 ] Hadoop QA commented on HADOOP-13483: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 9s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824062/HADOOP-13483-HADOOP-12756.003.patch | | JIRA Issue | HADOOP-13483 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fb8816f4733d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-12756 / 8346f922 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10271/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10271/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10271/testReport/ | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10271/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > file-create should throw error rather than overwrite directories > --
[jira] [Updated] (HADOOP-13483) file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13483: --- Attachment: HADOOP-13483-HADOOP-12756.003.patch > file-create should throw error rather than overwrite directories > > > Key: HADOOP-13483 > URL: https://issues.apache.org/jira/browse/HADOOP-13483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13483-HADOOP-12756.002.patch, > HADOOP-13483-HADOOP-12756.003.patch, HADOOP-13483.001.patch > > > similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]
[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13491: --- Attachment: HADOOP-13491-HADOOP-12756.004.patch > fix several warnings from findbugs > -- > > Key: HADOOP-13491 > URL: https://issues.apache.org/jira/browse/HADOOP-13491 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13491-HADOOP-12756.001.patch, > HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch, > HADOOP-13491-HADOOP-12756.004.patch > > > {code:title=Bad practice Warnings|borderStyle=solid} > Code Warning > RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores > result of java.io.InputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) > Called method java.io.InputStream.skip(long) > At AliyunOSSInputStream.java:[line 235] > RR > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > ignores result of java.io.FileInputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > Called method java.io.FileInputStream.skip(long) > At AliyunOSSOutputStream.java:[line 177] > RVExceptional return value of java.io.File.delete() ignored in > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Called method java.io.File.delete() > At AliyunOSSOutputStream.java:[line 116] > {code} > 
{code:title=Multithreaded correctness Warnings|borderStyle=solid} > Code Warning > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked > 90% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining > Synchronized 90% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Synchronized access at AliyunOSSInputStream.java:[line 106] > Synchronized access at AliyunOSSInputStream.java:[line 168] > Synchronized access at AliyunOSSInputStream.java:[line 189] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 190] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 131] > Synchronized access at AliyunOSSInputStream.java:[line 131] > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of > time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position > Synchronized 66% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 232] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 235] > Unsynchronized access at AliyunOSSInputStream.java:[line 236] > Unsynchronized access at AliyunOSSInputStream.java:[line 245] > Synchronized access at AliyunOSSInputStream.java:[line 222] > Synchronized access at AliyunOSSInputStream.java:[line 105] > Synchronized access at AliyunOSSInputStream.java:[line 167] > Synchronized access at 
AliyunOSSInputStream.java:[line 169] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 114] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 259] > Synchronized access at AliyunOSSInputStream.java:[line 266] > ISInconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked > 85% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.ap
[jira] [Commented] (HADOOP-13499) Support session credentials for authenticating with Aliyun
[ https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423785#comment-15423785 ] Hadoop QA commented on HADOOP-13499: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 45s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 9s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 42 new + 0 unchanged - 0 fixed = 42 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824052/HADOOP-13499-HADOOP-12756.001.patch | | JIRA Issue | HADOOP-13499 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e85e6dc3d46a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-12756 / 8346f922 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10269/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10269/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10269/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/10269/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10269/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-13499) Support session credentials for authenticating with Aliyun
[ https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13499: --- Status: Patch Available (was: In Progress) > Support session credentials for authenticating with Aliyun > -- > > Key: HADOOP-13499 > URL: https://issues.apache.org/jira/browse/HADOOP-13499 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu >Priority: Minor > Fix For: HADOOP-12756 > > Attachments: HADOOP-13499-HADOOP-12756.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13499) Support session credentials for authenticating with Aliyun
[ https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13499: --- Attachment: HADOOP-13499-HADOOP-12756.001.patch > Support session credentials for authenticating with Aliyun > -- > > Key: HADOOP-13499 > URL: https://issues.apache.org/jira/browse/HADOOP-13499 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu >Priority: Minor > Fix For: HADOOP-12756 > > Attachments: HADOOP-13499-HADOOP-12756.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423766#comment-15423766 ] Hadoop QA commented on HADOOP-12765: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 6m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} 
mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 29s{color} | {color:green} root: The patch generated 0 new + 81 unchanged - 1 fixed = 81 total (was 82) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 12s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 28s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 85m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824038/HADOOP-12765.004.patch | | JIRA Issue | HADOOP-12765 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux ea0bb0a
[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423763#comment-15423763 ] Genmao Yu commented on HADOOP-13491: Thanks for your review and guidance. > fix several warnings from findbugs > -- > > Key: HADOOP-13491 > URL: https://issues.apache.org/jira/browse/HADOOP-13491 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13491-HADOOP-12756.001.patch, > HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch > > > {code:title=Bad practice Warnings|borderStyle=solid} > Code Warning > RR org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores > result of java.io.InputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) > Called method java.io.InputStream.skip(long) > At AliyunOSSInputStream.java:[line 235] > RR > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > ignores result of java.io.FileInputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > Called method java.io.FileInputStream.skip(long) > At AliyunOSSOutputStream.java:[line 177] > RV Exceptional return value of java.io.File.delete() ignored in > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Called method java.io.File.delete() > At AliyunOSSOutputStream.java:[line 116] > {code} > 
{code:title=Multithreaded correctness Warnings|borderStyle=solid} > Code Warning > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked > 90% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining > Synchronized 90% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Synchronized access at AliyunOSSInputStream.java:[line 106] > Synchronized access at AliyunOSSInputStream.java:[line 168] > Synchronized access at AliyunOSSInputStream.java:[line 189] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 190] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 131] > Synchronized access at AliyunOSSInputStream.java:[line 131] > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of > time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position > Synchronized 66% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 232] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 235] > Unsynchronized access at AliyunOSSInputStream.java:[line 236] > Unsynchronized access at AliyunOSSInputStream.java:[line 245] > Synchronized access at AliyunOSSInputStream.java:[line 222] > Synchronized access at AliyunOSSInputStream.java:[line 105] > Synchronized access at AliyunOSSInputStream.java:[line 167] > Synchronized access at 
AliyunOSSInputStream.java:[line 169] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 114] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Synchronized access at AliyunOSSInputStream.java:[line 259] > Synchronized access at AliyunOSSInputStream.java:[line 266] > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked > 85% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field
[jira] [Comment Edited] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423758#comment-15423758 ] shimingfei edited comment on HADOOP-13491 at 8/17/16 2:24 AM: -- format: n-total to n - total, and how about changing the following logic to do..while? {code} +cur = is.skip(n-total); +total += cur; +while((total < n) && (cur > 0)) { + cur = is.skip(n-total); + total += cur; +} {code} how about changing the info to Failed to skip " + n + " bytes, possibly due to EOF {code} + throw new IOException("Not able to skip " + n + " bytes, possibly due " + + "to end of input."); {code} was (Author: shimingfei): format: n-total to n - total, and how about changing the following logic to do..while? {code} +cur = is.skip(n-total); +total += cur; +while((total < n) && (cur > 0)) { + cur = is.skip(n-total); + total += cur; +} {code} how about changing the info to Failed to skip " + n + " bytes, possibly due to end of input {code} + throw new IOException("Not able to skip " + n + " bytes, possibly due " + + "to end of input."); {code} > fix several warnings from findbugs > -- > > Key: HADOOP-13491 > URL: https://issues.apache.org/jira/browse/HADOOP-13491 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13491-HADOOP-12756.001.patch, > HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch > > > {code:title=Bad practice Warnings|borderStyle=solid} > Code Warning > RR org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores > result of java.io.InputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) > Called method java.io.InputStream.skip(long) > At AliyunOSSInputStream.java:[line 235] > RR > 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > ignores result of java.io.FileInputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > Called method java.io.FileInputStream.skip(long) > At AliyunOSSOutputStream.java:[line 177] > RV Exceptional return value of java.io.File.delete() ignored in > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Called method java.io.File.delete() > At AliyunOSSOutputStream.java:[line 116] > {code} > {code:title=Multithreaded correctness Warnings|borderStyle=solid} > Code Warning > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked > 90% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining > Synchronized 90% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Synchronized access at AliyunOSSInputStream.java:[line 106] > Synchronized access at AliyunOSSInputStream.java:[line 168] > Synchronized access at AliyunOSSInputStream.java:[line 189] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 190] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 131] > Synchronized access at AliyunOSSInputStream.java:[line 131] > IS Inconsistent synchronization of > 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of > time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position > Synchronized 66% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 232] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 235] > Unsynchronized access at AliyunOSSInputStream.java:[line 236] > Unsynchronized access at AliyunOSSInputStream.java:[line 245] > Synchronized acce
[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423758#comment-15423758 ] shimingfei commented on HADOOP-13491: - format: n-total to n - total, and how about changing the following logic to do..while? {code} +cur = is.skip(n-total); +total += cur; +while((total < n) && (cur > 0)) { + cur = is.skip(n-total); + total += cur; +} {code} how about changing the info to Failed to skip " + n + " bytes, possibly due to end of input {code} + throw new IOException("Not able to skip " + n + " bytes, possibly due " + + "to end of input."); {code} > fix several warnings from findbugs > -- > > Key: HADOOP-13491 > URL: https://issues.apache.org/jira/browse/HADOOP-13491 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13491-HADOOP-12756.001.patch, > HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch > > > {code:title=Bad practice Warnings|borderStyle=solid} > Code Warning > RR org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores > result of java.io.InputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) > Called method java.io.InputStream.skip(long) > At AliyunOSSInputStream.java:[line 235] > RR > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > ignores result of java.io.FileInputStream.skip(long) > Bug type SR_NOT_CHECKED (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() > Called method java.io.FileInputStream.skip(long) > At AliyunOSSOutputStream.java:[line 177] > RV Exceptional return value of 
java.io.File.delete() ignored in > org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream > In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close() > Called method java.io.File.delete() > At AliyunOSSOutputStream.java:[line 116] > {code} > {code:title=Multithreaded correctness Warnings|borderStyle=solid} > Code Warning > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked > 90% of time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining > Synchronized 90% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Synchronized access at AliyunOSSInputStream.java:[line 106] > Synchronized access at AliyunOSSInputStream.java:[line 168] > Synchronized access at AliyunOSSInputStream.java:[line 189] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 188] > Synchronized access at AliyunOSSInputStream.java:[line 190] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 131] > Synchronized access at AliyunOSSInputStream.java:[line 131] > IS Inconsistent synchronization of > org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of > time > Bug type IS2_INCONSISTENT_SYNC (click for details) > In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream > Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position > Synchronized 66% of the time > Unsynchronized access at AliyunOSSInputStream.java:[line 232] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized access at AliyunOSSInputStream.java:[line 234] > Unsynchronized 
access at AliyunOSSInputStream.java:[line 235] > Unsynchronized access at AliyunOSSInputStream.java:[line 236] > Unsynchronized access at AliyunOSSInputStream.java:[line 245] > Synchronized access at AliyunOSSInputStream.java:[line 222] > Synchronized access at AliyunOSSInputStream.java:[line 105] > Synchronized access at AliyunOSSInputStream.java:[line 167] > Synchronized access at AliyunOSSInputStream.java:[line 169] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 187] > Synchronized access at AliyunOSSInputStream.java:[line 113] > Synchronized access at AliyunOSSInputStream.java:[line 114] > Synchronized access at AliyunOSSInputStream.java:[line 130] > Sync
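The do..while rewrite of the skip loop that shimingfei suggests above can be sketched as a standalone helper. The class name, method name, and exception message below are illustrative, not the actual AliyunOSSInputStream patch:

```java
import java.io.IOException;
import java.io.InputStream;

// Hedged sketch of the reviewed fix: check InputStream.skip's return value
// (the SR_NOT_CHECKED warning above) and loop until either n bytes have been
// skipped or skip() stops making progress.
public class SkipFully {
    /** Skip exactly n bytes from the stream, or throw if the input ends early. */
    public static void skipFully(InputStream is, long n) throws IOException {
        long total = 0;
        long cur;
        do {
            cur = is.skip(n - total);
            total += cur;
        } while (total < n && cur > 0);
        if (total < n) {
            throw new IOException("Failed to skip " + n
                + " bytes, possibly due to end of input.");
        }
    }
}
```

The do..while form avoids duplicating the `skip`/accumulate pair before the loop, which is exactly what the review comment asks for.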
[jira] [Commented] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423724#comment-15423724 ] Hadoop QA commented on HADOOP-13503: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 17s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | Redundant nullcheck of StringBuilder.toString(), which is known to be non-null in org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(RpcHeaderProtos$RpcSaslProto$SaslAuth) Redundant null check at SaslRpcClient.java:is known to be non-null in org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(RpcHeaderProtos$RpcSaslProto$SaslAuth) Redundant null check at SaslRpcClient.java:[line 337] | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824034/HADOOP-13503.000.patch | | JIRA Issue | HADOOP-13503 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7bcde30e31dc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2353271 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10267/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10267/testReport/ | | modules | C: hadoop-common-project/hadoop-c
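The new Findbugs warning in the report above (redundant null check of `StringBuilder.toString()`) comes from guarding a value that is guaranteed non-null. A minimal illustration of the flagged pattern, with hypothetical names rather than the actual SaslRpcClient code:

```java
// StringBuilder.toString() never returns null, so null-checking its result
// is dead code and triggers Findbugs' redundant-nullcheck (RCN) warning.
public class PrincipalFormatter {
    public static String format(String service, String host) {
        StringBuilder sb = new StringBuilder();
        sb.append(service).append('/').append(host);
        // Flagged pattern:
        //   String principal = sb.toString();
        //   if (principal == null) { return null; }   // branch never taken
        return sb.toString(); // guaranteed non-null; no check needed
    }
}
```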
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13498: --- Description: We should not only throw an exception when we exceed the 10000 limit on the number of multipart parts, but should guarantee to upload any object no matter how big it is. > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch > > > We should not only throw an exception when we exceed the 10000 limit on the number of multipart parts, but should guarantee to upload any object no matter how big it is. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
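The guarantee described in the issue (any object uploads, regardless of size) comes down to growing the part size whenever the configured one would need more than 10000 parts. A sketch of that calculation; the class and method names are illustrative, and the 100 KB floor is the OSS minimum part size mentioned in the discussion:

```java
// Choose a part size that keeps ceil(fileLength / partSize) <= 10000,
// the OSS multipart part-count limit, while honoring the minimum part size.
public class PartSizeCalculator {
    static final int MAX_PARTS = 10000;            // OSS part-count limit
    static final long MIN_PART_SIZE = 100 * 1024;  // 100 KB lower bound

    public static long adjustPartSize(long fileLength, long configuredPartSize) {
        long partSize = Math.max(configuredPartSize, MIN_PART_SIZE);
        if (fileLength / partSize >= MAX_PARTS) {
            // Round up so the whole file still fits within MAX_PARTS parts.
            partSize = fileLength / MAX_PARTS
                + (fileLength % MAX_PARTS == 0 ? 0 : 1);
        }
        return partSize;
    }
}
```

With this in place the uploader never throws on large objects: the part size simply grows past the configured value once the file is big enough.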
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423708#comment-15423708 ] Genmao Yu commented on HADOOP-13498: Got it, thanks for your kind suggestion! > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information
[ https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423693#comment-15423693 ] Sean Mackrory commented on HADOOP-13494: Confused by these results - the patches contain no tab characters and have only 2 lines that end in whitespace. The code compiled and TestConfigRedactor passed - other test failures appear completely unrelated, as are the findbugs issues. > ReconfigurableBase can log sensitive information > > > Key: HADOOP-13494 > URL: https://issues.apache.org/jira/browse/HADOOP-13494 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.2.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13494-branch-2.6.001.patch, > HADOOP-13494-branch-2.7.001.patch, HADOOP-13494.001.patch, > HADOOP-13494.002.patch, HADOOP-13494.003.patch, HADOOP-13494.004.patch > > > ReconfigurableBase will log old and new configuration values, which may cause > sensitive parameters (most notably cloud storage keys, though there may be > other instances) to get included in the logs. > Given the currently small list of reconfigurable properties, an argument > could be made for simply not logging the property values at all, but this is > not the only instance where potentially sensitive configuration gets written > somewhere else in plaintext. I think a generic mechanism for redacting > sensitive information for textual display will be useful to some of the web > UIs too. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
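The generic redaction mechanism the issue proposes can be sketched as a small key-pattern filter. The class name, constructor, and regex list below are assumptions for illustration, not the real API that the patch (exercised by TestConfigRedactor) adds:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of the redaction idea: values of properties whose
// names match a sensitive-key pattern are replaced before being logged
// or displayed.
public class SensitiveConfigRedactor {
    private static final String REDACTED_TEXT = "<redacted>";
    private final List<Pattern> patterns;

    public SensitiveConfigRedactor(List<String> sensitiveKeyRegexes) {
        patterns = new ArrayList<>();
        for (String r : sensitiveKeyRegexes) {
            patterns.add(Pattern.compile(r));
        }
    }

    /** Return the value unchanged unless the property name looks sensitive. */
    public String redact(String key, String value) {
        for (Pattern p : patterns) {
            if (p.matcher(key).matches()) {
                return REDACTED_TEXT;
            }
        }
        return value;
    }
}
```

Keying the filter on property *names* rather than values keeps it cheap and lets the same mechanism serve both log statements and web UIs, as the description suggests.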
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423690#comment-15423690 ] Hudson commented on HADOOP-13470: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10289 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10289/]) HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by (liuml07: rev 23532716fcd3f7e5e20b8f9fc66188041638510a) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java > GenericTestUtils$LogCapturer is flaky > - > > Key: HADOOP-13470 > URL: https://issues.apache.org/jira/browse/HADOOP-13470 > Project: Hadoop Common > Issue Type: Bug > Components: test, util >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Labels: reviewed > Fix For: 2.8.0 > > Attachments: HADOOP-13470.000.patch, HADOOP-13470.001.patch, > HADOOP-13470.002.patch > > > {{GenericTestUtils$LogCapturer}} is useful for assertions against service > logs. However it should be fixed in following aspects: > # In the constructor, it uses the stdout appender's layout. > {code} > Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout(); > {code} > However, the stdout appender may be named "console" or alike which makes the > constructor throw NPE. Actually the layout does not matter and we can use a > default pattern layout that only captures application logs. > # {{stopCapturing()}} method is not working. The major reason is that the > {{appender}} internal variable is never assigned and thus removing it to stop > capturing makes no sense. > # It does not support {{org.slf4j.Logger}} which is preferred to log4j in > many modules. > # There is no unit test for it. > This jira is to address these. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
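The two key points of the LogCapturer fix described above - use a default layout instead of looking up an appender named "stdout" (which throws NPE when the console appender has another name), and keep a reference to the attached appender so stopCapturing() can actually remove it - can be sketched as follows. To keep the example dependency-free it uses java.util.logging rather than log4j, which the actual fix targets; the class and method names are illustrative.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Sketch of the LogCapturer fix: no lookup of a named appender, and the
// attached handler is retained so stopCapturing() can remove it again.
class LogCapturerSketch {
  private final StringBuilder captured = new StringBuilder();
  private final Handler handler;   // retained so stopCapturing() works
  private final Logger logger;

  LogCapturerSketch(Logger logger) {
    this.logger = logger;
    // Equivalent of using a default PatternLayout: format records ourselves
    // instead of borrowing the layout of an appender that may not exist.
    this.handler = new Handler() {
      @Override public void publish(LogRecord r) {
        captured.append(r.getMessage()).append('\n');
      }
      @Override public void flush() { }
      @Override public void close() { }
    };
    handler.setLevel(Level.ALL);
    logger.addHandler(handler);
  }

  String getOutput() {
    return captured.toString();
  }

  void stopCapturing() {
    logger.removeHandler(handler);   // works because we kept the reference
  }
}
```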
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423688#comment-15423688 ] Min Shen commented on HADOOP-12765: --- [~jojochuang], Thanks a lot for reviewing the patch! I've updated it addressing these 2 issues. > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Attachments: HADOOP-12765.001.patch, HADOOP-12765.001.patch, > HADOOP-12765.002.patch, HADOOP-12765.003.patch, HADOOP-12765.004.patch, > blocking_1.png, blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With a sufficient number of SSL connections, > this issue could render the NN HttpServer entirely unresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Min Shen updated HADOOP-12765: -- Attachment: HADOOP-12765.004.patch > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Attachments: HADOOP-12765.001.patch, HADOOP-12765.001.patch, > HADOOP-12765.002.patch, HADOOP-12765.003.patch, HADOOP-12765.004.patch, > blocking_1.png, blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With a sufficient number of SSL connections, > this issue could render the NN HttpServer entirely unresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13470: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I have committed the v2 patch to {{trunk}}, {{branch-2}} and {{branch-2.8}}. Thanks for prompt review [~cnauroth]. > GenericTestUtils$LogCapturer is flaky > - > > Key: HADOOP-13470 > URL: https://issues.apache.org/jira/browse/HADOOP-13470 > Project: Hadoop Common > Issue Type: Bug > Components: test, util >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Labels: reviewed > Fix For: 2.8.0 > > Attachments: HADOOP-13470.000.patch, HADOOP-13470.001.patch, > HADOOP-13470.002.patch > > > {{GenericTestUtils$LogCapturer}} is useful for assertions against service > logs. However it should be fixed in following aspects: > # In the constructor, it uses the stdout appender's layout. > {code} > Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout(); > {code} > However, the stdout appender may be named "console" or alike which makes the > constructor throw NPE. Actually the layout does not matter and we can use a > default pattern layout that only captures application logs. > # {{stopCapturing()}} method is not working. The major reason is that the > {{appender}} internal variable is never assigned and thus removing it to stop > capturing makes no sense. > # It does not support {{org.slf4j.Logger}} which is preferred to log4j in > many modules. > # There is no unit test for it. > This jira is to address these. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information
[ https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423682#comment-15423682 ] Hadoop QA commented on HADOOP-13494: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 26s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 25s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 11s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} branch-2.7 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 44s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.7 has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 3 new + 120 unchanged - 1 fixed = 123 total (was 121) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3449 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 25s{color} | {color:red} The patch 90 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 1s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 88m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_101 Failed junit tests | hadoop.util.bloom.TestBloomFilters | | JDK v1.8.0_101 Timed out junit tests | org.apache.hadoop.conf.TestConfiguration | | JDK v1.7.0_101 Failed junit tests | hadoop.ipc.TestD
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-13503: --- Status: Patch Available (was: Open) > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HADOOP-13503.000.patch > > > In SaslRpcClient#getServerPrincipal, it only printed out server advertised > principal. The actual principal we expect from configuration is quite useful > while debugging security related issues. It should also be logged. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-13503: --- Description: In SaslRpcClient#getServerPrincipal, it only printed out server advertised principal. The actual principal we expect from configuration is quite useful while debugging security related issues. It should also be logged. (was: In SaslRpcClient#getServerPrincipal, it only printed out server advertised principal. The actual principal we expected from configuration is quite useful while debugging security related issues. It should also be logged.) > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HADOOP-13503.000.patch > > > In SaslRpcClient#getServerPrincipal, it only printed out server advertised > principal. The actual principal we expect from configuration is quite useful > while debugging security related issues. It should also be logged. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-13503: --- Attachment: HADOOP-13503.000.patch I posted initial patch v000, please help to review it, thanks. > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HADOOP-13503.000.patch > > > In SaslRpcClient#getServerPrincipal, it only printed out server advertised > principal. The actual principal we expect from configuration is quite useful > while debugging security related issues. It should also be logged. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
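The improvement discussed in HADOOP-13503 - logging the principal expected from configuration next to the one the server advertises, so mismatches are obvious when debugging security issues - can be sketched as below. The method and parameter names are illustrative assumptions, not the patch's actual code in SaslRpcClient#getServerPrincipal.

```java
// Hypothetical sketch: build a debug line carrying both the server-advertised
// principal and the principal read from configuration, so a mismatch between
// the two is visible from a single log line.
class PrincipalLogSketch {
  static String formatPrincipalLog(String advertised, String confKey,
      String expected) {
    return "RPC server's Kerberos principal is " + advertised
        + " (principal from configuration " + confKey + " = "
        + (expected == null ? "<unset>" : expected) + ")";
  }
}
```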
[jira] [Commented] (HADOOP-13324) s3a tests don't authenticate with S3 frankfurt (or other V4 auth only endpoints)
[ https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423676#comment-15423676 ] Hudson commented on HADOOP-13324: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10288 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10288/]) HADOOP-13324. s3a tests don't authenticate with S3 frankfurt (or other (cnauroth: rev 3808876c7397ea68906bc5cc18fdf690c9c42131) * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3AInputStreamPerformance.java > s3a tests don't authenticate with S3 frankfurt (or other V4 auth only > endpoints) > > > Key: HADOOP-13324 > URL: https://issues.apache.org/jira/browse/HADOOP-13324 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0 > > Attachments: HADOOP-13324-branch-2-001.patch, > HADOOP-13324-branch-2-001.patch > > > S3A doesn't auth with S3 frankfurt. This installation only supports v4 API. > There are some JVM options which should set this, but even they don't appear > to be enough. It appears that we have to allow the s3a client to change the > endpoint with which it authenticates from a generic "AWS S3" to a > frankfurt-specific one. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
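For reference, pointing the s3a client at a region-specific, V4-auth-only endpoint as described above is done through the `fs.s3a.endpoint` property. A minimal core-site.xml fragment, shown here with the standard eu-central-1 (Frankfurt) S3 endpoint:

```xml
<!-- Illustrative fragment: direct s3a at the Frankfurt endpoint so the AWS
     SDK signs requests with the V4 algorithm that region requires. -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
```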
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-13503: --- Description: In SaslRpcClient#getServerPrincipal, it only printed out server advertised principal. The actual principal we expected from configuration is quite useful while debugging security related issues. It should also be logged. (was: In SaslRpcClient#getServerPrincipal, it only logged the server advertised principal. It should also log the expected principal name for easier debugging.) > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > In SaslRpcClient#getServerPrincipal, it only printed out server advertised > principal. The actual principal we expected from configuration is quite > useful while debugging security related issues. It should also be logged. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-13503: --- Description: In SaslRpcClient#getServerPrincipal, it only logged the server advertised principal. It should also log the expected principal name for easier debugging. was:In SaslRpcClient#getServerPrincipal, it only printed out server advertised principal. > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > In SaslRpcClient#getServerPrincipal, it only logged the server advertised > principal. > It should also log the expected principal name for easier debugging. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-13503: --- Description: In SaslRpcClient#getServerPrincipal, it only printed out server advertised principal. > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > In SaslRpcClient#getServerPrincipal, it only printed out server advertised > principal. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-13503: --- Affects Version/s: 2.7.0 > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423659#comment-15423659 ] Chris Nauroth commented on HADOOP-13470: +1 for patch v2. I verified {{TestFileSystemOperationsWithThreads}} and {{TestBootstrapStandby}}. Thank you, Mingliang. > GenericTestUtils$LogCapturer is flaky > - > > Key: HADOOP-13470 > URL: https://issues.apache.org/jira/browse/HADOOP-13470 > Project: Hadoop Common > Issue Type: Bug > Components: test, util >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Labels: reviewed > Fix For: 2.8.0 > > Attachments: HADOOP-13470.000.patch, HADOOP-13470.001.patch, > HADOOP-13470.002.patch > > > {{GenericTestUtils$LogCapturer}} is useful for assertions against service > logs. However it should be fixed in following aspects: > # In the constructor, it uses the stdout appender's layout. > {code} > Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout(); > {code} > However, the stdout appender may be named "console" or alike which makes the > constructor throw NPE. Actually the layout does not matter and we can use a > default pattern layout that only captures application logs. > # {{stopCapturing()}} method is not working. The major reason is that the > {{appender}} internal variable is never assigned and thus removing it to stop > capturing makes no sense. > # It does not support {{org.slf4j.Logger}} which is preferred to log4j in > many modules. > # There is no unit test for it. > This jira is to address these. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13503) Logging actual principal info from configuration
[ https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-13503: --- Component/s: security > Logging actual principal info from configuration > > > Key: HADOOP-13503 > URL: https://issues.apache.org/jira/browse/HADOOP-13503 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423653#comment-15423653 ] Hadoop QA commented on HADOOP-13470: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 32s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 25s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12824024/HADOOP-13470.002.patch | | JIRA Issue | HADOOP-13470 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7247a83cce75 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 27a6e09 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10266/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10266/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > GenericTestUtils$LogCapturer is flaky > - > > Key: HADOOP-13470 > URL: https://issues.apache.org/jira/browse/HADOOP-13470 > Project: Hadoop Common > Issue Type: Bug > Components: test, util >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Labels: reviewed > Fix For: 2.8.0 > > Attachments: HADOOP-13470.000.patch, HADOOP-13470.001.patch, > HADOOP-13470.002.patch > > > {{GenericTestUtils$LogCapturer}} is use
[jira] [Created] (HADOOP-13503) Logging actual principal info from configuration
Xiaobing Zhou created HADOOP-13503: -- Summary: Logging actual principal info from configuration Key: HADOOP-13503 URL: https://issues.apache.org/jira/browse/HADOOP-13503 Project: Hadoop Common Issue Type: Improvement Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13324) s3a tests don't authenticate with S3 frankfurt (or other V4 auth only endpoints)
[ https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13324: --- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I have committed this to trunk, branch-2 and branch-2.8. Steve, thank you again. > s3a tests don't authenticate with S3 frankfurt (or other V4 auth only > endpoints) > > > Key: HADOOP-13324 > URL: https://issues.apache.org/jira/browse/HADOOP-13324 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0 > > Attachments: HADOOP-13324-branch-2-001.patch, > HADOOP-13324-branch-2-001.patch > > > S3A doesn't auth with S3 frankfurt. This installation only supports v4 API. > There are some JVM options which should set this, but even they don't appear > to be enough. It appears that we have to allow the s3a client to change the > endpoint with which it authenticates from a generic "AWS S3" to a > frankfurt-specific one. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13470: --- Hadoop Flags: (was: Reviewed) Status: Patch Available (was: Reopened) > GenericTestUtils$LogCapturer is flaky > - > > Key: HADOOP-13470 > URL: https://issues.apache.org/jira/browse/HADOOP-13470 > Project: Hadoop Common > Issue Type: Bug > Components: test, util >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Labels: reviewed > Fix For: 2.8.0 > > Attachments: HADOOP-13470.000.patch, HADOOP-13470.001.patch, > HADOOP-13470.002.patch > > > {{GenericTestUtils$LogCapturer}} is useful for assertions against service > logs. However it should be fixed in following aspects: > # In the constructor, it uses the stdout appender's layout. > {code} > Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout(); > {code} > However, the stdout appender may be named "console" or alike which makes the > constructor throw NPE. Actually the layout does not matter and we can use a > default pattern layout that only captures application logs. > # {{stopCapturing()}} method is not working. The major reason is that the > {{appender}} internal variable is never assigned and thus removing it to stop > capturing makes no sense. > # It does not support {{org.slf4j.Logger}} which is preferred to log4j in > many modules. > # There is no unit test for it. > This jira is to address these. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423603#comment-15423603 ] Hudson commented on HADOOP-13470: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10287 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10287/]) Revert "HADOOP-13470. GenericTestUtils$LogCapturer is flaky." (liuml07: rev 27a6e09c4e22b9b5fee4e8ced7321eed92d566a4)
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
[jira] [Updated] (HADOOP-13324) s3a tests don't authenticate with S3 frankfurt (or other V4 auth only endpoints)
[ https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13324: --- Hadoop Flags: Reviewed
Steve, thank you for your testing against multiple S3 endpoints and documenting your findings. +1 for the patch. I plan to commit this shortly after I apply one trivial fix for a typo in index.md:
{code}
This happens when trying to work with any S3 service which only supports the "V4" signing API —and he client is configured to use the default S3A service
{code}
"...and the client..."
[jira] [Updated] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13470: --- Attachment: HADOOP-13470.002.patch
Attaching the v2 patch according to the latest discussion. Thanks!
[jira] [Reopened] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu reopened HADOOP-13470:
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423580#comment-15423580 ] Chris Nauroth commented on HADOOP-13470: Actually, this looks like a good candidate to revert and then post an updated patch, given that the original patch was isolated and there have been no additional conflicting changes. Thanks again, Mingliang.
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423577#comment-15423577 ] Chris Nauroth commented on HADOOP-13470: bq. A simple fix is to use the default {{PatternLayout}} only if the {{stdout}} and {{console}} appenders are not defined.
That sounds perfect. That would be backward-compatible for any existing tests that rely on a specific pattern in log4j.properties. Thanks! Please feel free to notify me for code review on the new patch, and I'll try {{TestFileSystemOperationsWithThreads}} with it.
[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information
[ https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-13494: --- Attachment: HADOOP-13494-branch-2.7.001.patch, HADOOP-13494-branch-2.6.001.patch
Attaching patches for branch-2.6 and branch-2.7. Thanks!
> ReconfigurableBase can log sensitive information
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 2.2.0
> Reporter: Sean Mackrory
> Assignee: Sean Mackrory
> Fix For: 2.8.0, 3.0.0-alpha2
> Attachments: HADOOP-13494-branch-2.6.001.patch, HADOOP-13494-branch-2.7.001.patch, HADOOP-13494.001.patch, HADOOP-13494.002.patch, HADOOP-13494.003.patch, HADOOP-13494.004.patch
>
> ReconfigurableBase will log old and new configuration values, which may cause sensitive parameters (most notably cloud storage keys, though there may be other instances) to get included in the logs.
> Given the currently small list of reconfigurable properties, an argument could be made for simply not logging the property values at all, but this is not the only instance where potentially sensitive configuration gets written somewhere else in plaintext. I think a generic mechanism for redacting sensitive information for textual display will be useful to some of the web UIs too.
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423563#comment-15423563 ] Mingliang Liu commented on HADOOP-13470: Thanks [~cnauroth] for the report and analysis. Sorry, I was not aware of the case where the captured log format matters beyond the application log itself. A simple fix is to use the default {{PatternLayout}} only if the {{stdout}} and {{console}} appenders are not defined. This should not make the existing code fail, and it guards the cases where those appenders are not defined. At least, if an incoming test relies on the log format, it should define the format in a module-specific log4j.properties. If this looks good, I can prepare a simple patch for this.
{{TestBootstrapStandby#testSharedEditsMissingLogs}} asserts the log level (FATAL), which is the same problem. Thanks [~kihwal] for reporting this. This was also missed in the pre-commit build.
{code}
  private LogCapturer(Logger logger) {
    this.logger = logger;
-   this.appender = new WriterAppender(new PatternLayout(), sw);
-   logger.addAppender(appender);
+   Appender defaultAppender = Logger.getRootLogger().getAppender("stdout");
+   if (defaultAppender == null) {
+     defaultAppender = Logger.getRootLogger().getAppender("console");
+   }
+   final Layout layout = (defaultAppender == null)
+       ? new PatternLayout() : defaultAppender.getLayout();
+   this.appender = new WriterAppender(layout, sw);
+   logger.addAppender(this.appender);
  }
{code}
[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information
[ https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423532#comment-15423532 ] Hudson commented on HADOOP-13494: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10286 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10286/]) HADOOP-13494. ReconfigurableBase can log sensitive information. (wang: rev 4b689e7a758a55cec2ca8398727feefc8ac21bfd)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigRedactor.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java
* (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigRedactor.java
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423527#comment-15423527 ] Kihwal Lee commented on HADOOP-13470: - {{TestBootstrapStandby}} also started failing after this.
[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky
[ https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423518#comment-15423518 ] Chris Nauroth commented on HADOOP-13470: Hello [~liuml07]. This patch unfortunately broke {{TestFileSystemOperationsWithThreads}} in hadoop-azure. It wasn't caught in pre-commit, because {{TestFileSystemOperationsWithThreads}} doesn't execute unless the build environment is configured with Azure Storage credentials (similar to hadoop-aws). The failing tests have a few assertions that check for log messages containing specific thread names: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestFileSystemOperationsWithThreads.java#L285-L288 This is intended to validate that all expected threads in a parallelized operation performed some work. After {{LogCapturer}} switched to using a default {{PatternLayout}}, the thread name is no longer included in the captured logs. Possible solutions for this are either to revert the patch or to change the instantiated {{PatternLayout}} to use a pattern that includes the thread name. I scanned a few of our existing test log4j.properties files, and there isn't a single consistent pattern used across all of them right now, so I guess we'd just have to pick something reasonable and go with it. Let me know your thoughts. Thanks.
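The second option above, a {{PatternLayout}} whose conversion pattern includes the thread name, could look like the following log4j 1.x properties fragment. The exact pattern is only an illustration ({{%t}} is the log4j 1.x conversion character for the thread name); the equivalent in code would be {{new PatternLayout("%d{ISO8601} [%t] %-5p %c{2}: %m%n")}}.

```properties
# Illustrative log4j 1.x appender configuration; %t emits the thread name,
# so assertions that check for specific thread names in captured logs keep working.
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2}: %m%n
```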
[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information
[ https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13494: - Fix Version/s: 3.0.0-alpha2, 2.8.0
I've committed this to trunk, branch-2, branch-2.8, but beyond that there are significant conflicts due in part to some missing ReconfigurableBase changes. I took a look at backporting these, but it seems hard. Sean, do you mind preparing separate patches for branch-2.7 and branch-2.6? Thanks.
[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information
[ https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423465#comment-15423465 ] Andrew Wang commented on HADOOP-13494: -- +1 LGTM, one little nit is that I try to avoid wildcard imports since they can pull in unknown stuff, but not necessary to fix here. I'll check this in shortly.
[jira] [Commented] (HADOOP-13501) Run contract tests guarded by "is-blobstore" flag against WASB.
[ https://issues.apache.org/jira/browse/HADOOP-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423372#comment-15423372 ] Hadoop QA commented on HADOOP-13501: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823984/HADOOP-13501.001.patch | | JIRA Issue | HADOOP-13501 | | Optional Tests | asflicense unit xml | | uname | Linux c0218027d386 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6c154ab | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10264/testReport/ | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10264/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Run contract tests guarded by "is-blobstore" flag against WASB. > --- > > Key: HADOOP-13501 > URL: https://issues.apache.org/jira/browse/HADOOP-13501 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13501.001.patch > > > The {{fs.contract.is-blobstore}} flag guards against execution of several > contract tests when the file system is backed by a blob store. Even though > Azure Storage is a blob store, WASB is still capable of passing these tests, > because its implementation matches the semantics of HDFS in these areas. > This issue proposes setting the flag to {{false}} for improved regression > testing of WASB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423366#comment-15423366 ] Sangjin Lee commented on HADOOP-13410: -- This should not have any bearing (one way or the other) on HADOOP-12728. If I understood correctly, HADOOP-12728 seems to be an issue of the order between the jar in the argument and what's in the underlying CLASSPATH or HADOOP_CLASSPATH. This JIRA concerns removing the redundant entries for the jar, and does not affect the above ordering problem.
> RunJar adds the content of the jar twice to the classpath
> Key: HADOOP-13410
> URL: https://issues.apache.org/jira/browse/HADOOP-13410
> Project: Hadoop Common
> Issue Type: Bug
> Components: util
> Reporter: Sangjin Lee
> Assignee: Yuanbo Liu
> Fix For: 3.0.0-alpha2
> Attachments: HADOOP-13410.001.patch
>
> Today when you run a "hadoop jar" command, the jar is unzipped to a temporary location and gets added to the classloader. However, the original jar itself is still added to the classpath.
> {code}
> List classPath = new ArrayList<>();
> classPath.add(new File(workDir + "/").toURI().toURL());
> classPath.add(file.toURI().toURL());
> classPath.add(new File(workDir, "classes/").toURI().toURL());
> File[] libs = new File(workDir, "lib").listFiles();
> if (libs != null) {
>   for (File lib : libs) {
>     classPath.add(lib.toURI().toURL());
>   }
> }
> {code}
> As a result, the contents of the jar are present in the classpath *twice* and are completely redundant. Although this does not necessarily cause correctness issues, some stricter code written to require a single presence of files may fail.
> I cannot think of a good reason why the jar should be added to the classpath if the unjarred content was added to it. I think we should remove the jar from the classpath.
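The fix described in the HADOOP-13410 report amounts to dropping the second {{classPath.add}} call, so the unpacked work directory is the only source of the jar's contents. Below is a self-contained sketch of that shape; the {{buildClassPath}} helper name and signature are hypothetical, not the actual RunJar code.

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

/** Sketch of the proposed RunJar fix: build the classpath from the unpacked
 *  work directory only, leaving out the original jar so its contents appear
 *  exactly once on the classpath. */
public class RunJarClasspathSketch {
    static List<URL> buildClassPath(File workDir, File jar)
            throws MalformedURLException {
        List<URL> classPath = new ArrayList<>();
        classPath.add(new File(workDir + "/").toURI().toURL());
        // classPath.add(jar.toURI().toURL());  // removed: contents already unpacked into workDir
        classPath.add(new File(workDir, "classes/").toURI().toURL());
        File[] libs = new File(workDir, "lib").listFiles();
        if (libs != null) {
            for (File lib : libs) {
                classPath.add(lib.toURI().toURL());
            }
        }
        return classPath;
    }
}
```

With the jar entry gone, every class file is visible to the classloader through exactly one URL, which satisfies stricter code that requires a single presence of each resource.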
[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423352#comment-15423352 ] Chris Nauroth commented on HADOOP-13502: HADOOP-13501 shows that a file system backed by a blob store (WASB) can pass these tests, depending on its implementation.
[jira] [Created] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.
Chris Nauroth created HADOOP-13502: -- Summary: Rename/split fs.contract.is-blobstore flag used by contract tests. Key: HADOOP-13502 URL: https://issues.apache.org/jira/browse/HADOOP-13502 Project: Hadoop Common Issue Type: Improvement Components: test Reporter: Chris Nauroth Priority: Minor The {{fs.contract.is-blobstore}} flag guards against execution of several contract tests to account for known limitations with blob stores. However, the name is not entirely accurate, because it's still possible that a file system implemented against a blob store could pass those tests, depending on whether or not the implementation matches the semantics of HDFS. This issue proposes to rename the flag or split it into different flags with different definitions for the semantics covered by the current flag. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13501) Run contract tests guarded by "is-blobstore" flag against WASB.
[ https://issues.apache.org/jira/browse/HADOOP-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13501: --- Status: Patch Available (was: Open) > Run contract tests guarded by "is-blobstore" flag against WASB. > --- > > Key: HADOOP-13501 > URL: https://issues.apache.org/jira/browse/HADOOP-13501 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13501.001.patch > > > The {{fs.contract.is-blobstore}} flag guards against execution of several > contract tests when the file system is backed by a blob store. Even though > Azure Storage is a blob store, WASB is still capable of passing these tests, > because its implementation matches the semantics of HDFS in these areas. > This issue proposes setting the flag to {{false}} for improved regression > testing of WASB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13501) Run contract tests guarded by "is-blobstore" flag against WASB.
[ https://issues.apache.org/jira/browse/HADOOP-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13501: --- Attachment: HADOOP-13501.001.patch I'm attaching a patch. I noticed this after a conversation about visibility of a file immediately upon creation. WASB does implement this behavior correctly by saving a zero-byte blob at the path during the {{create}} call before returning the stream. That means that {{AbstractContractCreateTest#testCreatedFileIsImmediatelyVisible}} can pass against WASB. I have done a full test run against my Azure Storage account. > Run contract tests guarded by "is-blobstore" flag against WASB. > --- > > Key: HADOOP-13501 > URL: https://issues.apache.org/jira/browse/HADOOP-13501 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13501.001.patch > > > The {{fs.contract.is-blobstore}} flag guards against execution of several > contract tests when the file system is backed by a blob store. Even though > Azure Storage is a blob store, WASB is still capable of passing these tests, > because its implementation matches the semantics of HDFS in these areas. > This issue proposes setting the flag to {{false}} for improved regression > testing of WASB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13501) Run contract tests guarded by "is-blobstore" flag against WASB.
Chris Nauroth created HADOOP-13501: -- Summary: Run contract tests guarded by "is-blobstore" flag against WASB. Key: HADOOP-13501 URL: https://issues.apache.org/jira/browse/HADOOP-13501 Project: Hadoop Common Issue Type: Improvement Components: fs/azure Reporter: Chris Nauroth Assignee: Chris Nauroth Priority: Minor The {{fs.contract.is-blobstore}} flag guards against execution of several contract tests when the file system is backed by a blob store. Even though Azure Storage is a blob store, WASB is still capable of passing these tests, because its implementation matches the semantics of HDFS in these areas. This issue proposes setting the flag to {{false}} for improved regression testing of WASB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
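The proposal in HADOOP-13501 amounts to flipping one flag in the contract definition resource that the WASB contract tests load. A sketch of what that looks like (the resource location is an assumption; the key follows the `fs.contract.` option pattern used by the contract test framework):

```xml
<!-- WASB contract definition resource (path assumed, e.g. under the
     hadoop-tools/hadoop-azure test resources) -->
<property>
  <name>fs.contract.is-blobstore</name>
  <!-- previously true; HADOOP-13501 proposes false so the guarded
       contract tests actually run against WASB -->
  <value>false</value>
</property>
```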
[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism
[ https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423335#comment-15423335 ] Chris Nauroth commented on HADOOP-13252: bq. In core-default.xml, please mention that the list of credentials provider classes is comma-separated. This comment also applies to the copy of the content in index.md. > Tune S3A provider plugin mechanism > -- > > Key: HADOOP-13252 > URL: https://issues.apache.org/jira/browse/HADOOP-13252 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13252-branch-2-001.patch, > HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch > > > We've now got some fairly complex auth mechanisms going on: -hadoop config, > KMS, env vars, "none". IF something isn't working, it's going to be a lot > harder to debug. > Review and tune the S3A provider point > * add logging of what's going on in s3 auth to help debug problems > * make a whole chain of logins expressible > * allow the anonymous credentials to be included in the list > * review and updated documents. > I propose *carefully* adding some debug messages to identify which auth > provider is doing the auth, so we can see if the env vars were kicking in, > sysprops, etc. > What we mustn't do is leak any secrets: this should be identifying whether > properties and env vars are set, not what their values are. I don't believe > that this will generate a security risk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423306#comment-15423306 ] Wei-Chiu Chuang commented on HADOOP-12765: -- Thanks again for the patch. Looks really good to me! One minor issue I noticed is {{configureChannelConnector(AbstractNIOConnector c)}}, which you could define as {{configureChannelConnector(SelectChannelConnector c)}}. Also, instead of specifying the version of jetty-sslengine in hadoop-project/pom.xml, can you use the variable $\{jetty.version\} for the version number? This will help us avoid inconsistency if we want to upgrade Jetty in the future. > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Attachments: HADOOP-12765.001.patch, HADOOP-12765.001.patch, > HADOOP-12765.002.patch, HADOOP-12765.003.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With sufficient number of SSL connections, > this issue could render the NN HttpServer entirely unresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. 
We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
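Wei-Chiu's second suggestion — reusing the shared version property instead of a literal — would look roughly like this in hadoop-project/pom.xml (the coordinates are an assumption based on the Jetty 6 line that Hadoop 2.x builds against):

```xml
<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-sslengine</artifactId>
  <!-- reuse the shared property rather than hard-coding a version,
       so a future Jetty upgrade only touches one place -->
  <version>${jetty.version}</version>
</dependency>
```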
[jira] [Commented] (HADOOP-13500) Concurrency issues when using Configuration iterator
[ https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423284#comment-15423284 ] Jason Lowe commented on HADOOP-13500: - See TEZ-3413 for a sample backtrace. I believe the fix is to simply lock the Properties object while iterating it within the Configuration#iterator method, but I haven't thought through all of the potential interactions to see whether we also have to lock the Configuration object itself or do something even more complicated. > Concurrency issues when using Configuration iterator > > > Key: HADOOP-13500 > URL: https://issues.apache.org/jira/browse/HADOOP-13500 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Jason Lowe > > It is possible to encounter a ConcurrentModificationException while trying to > iterate a Configuration object. The iterator method tries to walk the > underlying Properties object without proper synchronization, so another thread > simultaneously calling the set method can trigger it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
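The direction suggested in the comment — snapshot the underlying properties under a lock, then iterate the snapshot — can be sketched like this (the class and method names are illustrative stand-ins, not the real Hadoop Configuration code):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Properties;

public class SnapshotIteratorSketch {
  private final Properties props = new Properties();

  public void set(String key, String value) {
    synchronized (props) {      // writers take the same lock as the snapshot
      props.setProperty(key, value);
    }
  }

  // Copy the entries while holding the Properties lock, then iterate the
  // copy: a concurrent set() can no longer cause
  // ConcurrentModificationException in the caller's loop.
  public Iterator<Map.Entry<String, String>> iterator() {
    Map<String, String> snapshot = new HashMap<>();
    synchronized (props) {
      for (String name : props.stringPropertyNames()) {
        snapshot.put(name, props.getProperty(name));
      }
    }
    return snapshot.entrySet().iterator();
  }

  public static void main(String[] args) {
    SnapshotIteratorSketch conf = new SnapshotIteratorSketch();
    conf.set("a", "1");
    Iterator<Map.Entry<String, String>> it = conf.iterator();
    conf.set("b", "2");         // mutation after the snapshot is harmless
    int seen = 0;
    while (it.hasNext()) { it.next(); seen++; }
    System.out.println("seen=" + seen);  // prints seen=1
  }
}
```

The trade-off is that the iterator sees a point-in-time view rather than live data, which is usually the desired semantics for configuration dumps anyway; whether Configuration itself also needs locking is exactly the open question in the comment.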
[jira] [Created] (HADOOP-13500) Concurrency issues when using Configuration iterator
Jason Lowe created HADOOP-13500: --- Summary: Concurrency issues when using Configuration iterator Key: HADOOP-13500 URL: https://issues.apache.org/jira/browse/HADOOP-13500 Project: Hadoop Common Issue Type: Bug Components: conf Reporter: Jason Lowe It is possible to encounter a ConcurrentModificationException while trying to iterate a Configuration object. The iterator method tries to walk the underlying Properties object without proper synchronization, so another thread simultaneously calling the set method can trigger it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13446: --- Target Version/s: 2.9.0 (was: HADOOP-13345) Summary: Support running isolated unit tests separate from AWS integration tests. (was: S3Guard: Support running isolated unit tests separate from AWS integration tests.) > Support running isolated unit tests separate from AWS integration tests. > > > Key: HADOOP-13446 > URL: https://issues.apache.org/jira/browse/HADOOP-13446 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13446-HADOOP-13345.001.patch, > HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch > > > Currently, the hadoop-aws module only runs Surefire if AWS credentials have > been configured. This implies that all tests must run integrated with the > AWS back-end. It also means that no tests run as part of ASF pre-commit. > This issue proposes for the hadoop-aws module to support running isolated > unit tests without integrating with AWS. This will benefit S3Guard, because > we expect the need for isolated mock-based testing to simulate eventual > consistency behavior. It also benefits hadoop-aws in general by allowing > pre-commit to do something more valuable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13447) Refactor S3AFileSystem to support introduction of separate metadata repository and tests.
[ https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13447: --- Target Version/s: 2.9.0 (was: HADOOP-13345) Summary: Refactor S3AFileSystem to support introduction of separate metadata repository and tests. (was: S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.) > Refactor S3AFileSystem to support introduction of separate metadata > repository and tests. > - > > Key: HADOOP-13447 > URL: https://issues.apache.org/jira/browse/HADOOP-13447 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13447-HADOOP-13446.001.patch, > HADOOP-13447-HADOOP-13446.002.patch > > > The scope of this issue is to refactor the existing {{S3AFileSystem}} into > multiple coordinating classes. The goal of this refactoring is to separate > the {{FileSystem}} API binding from the AWS SDK integration, make code > maintenance easier while we're making changes for S3Guard, and make it easier > to mock some implementation details so that tests can simulate eventual > consistency behavior in a deterministic way. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423114#comment-15423114 ] Kihwal Lee commented on HADOOP-13498: - Hi, [~uncleGen], thanks for contributing. Your jira "Full Name" is going to be used for official change records. If you want a different name to appear in change logs, release notes, etc., please change the full name field of your jira profile from "uncleGen". > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: uncleGen >Assignee: uncleGen > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
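The constraint in the title — at most 10000 parts per multipart upload, however large the object — implies the part size must grow with the object size. That sizing rule can be sketched as follows (the 100 KB floor comes from the OSS lower limit mentioned in the discussion; the constant and method names are illustrative, not the actual patch):

```java
public class MultipartSizing {
  static final long MAX_PARTS = 10000L;
  static final long MIN_PART_SIZE = 100L * 1024;  // OSS minimum part size per the thread

  // Grow the configured part size just enough that partCount <= MAX_PARTS,
  // instead of throwing once the 10000-part limit would be exceeded.
  static long effectivePartSize(long fileSize, long configuredPartSize) {
    long partSize = Math.max(configuredPartSize, MIN_PART_SIZE);
    long parts = (fileSize + partSize - 1) / partSize;    // ceiling division
    if (parts > MAX_PARTS) {
      partSize = (fileSize + MAX_PARTS - 1) / MAX_PARTS;  // smallest size that fits
    }
    return partSize;
  }

  public static void main(String[] args) {
    long tenTB = 10L << 40;
    long size = effectivePartSize(tenTB, 100L * 1024);
    long parts = (tenTB + size - 1) / size;
    System.out.println("parts=" + parts);  // never exceeds 10000
  }
}
```

This also explains the test-sizing question in the comment: with `fs.oss.multipart.upload.size` at the 100 KB floor, a file approaching 1 GB already produces close to 10000 parts, so it exercises the limit without needing a 100 GB upload.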
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422952#comment-15422952 ] Allen Wittenauer commented on HADOOP-13410: --- I wonder how much this makes HADOOP-12728 worse. In other words, if the jar pointed to by the user is trying to override a class in the classpath, does this make it worse? > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13410.001.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List<URL> classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12728) "hadoop jar my.jar" should probably prioritize my.jar in the classpath by default
[ https://issues.apache.org/jira/browse/HADOOP-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-12728: -- Assignee: (was: Allen Wittenauer) > "hadoop jar my.jar" should probably prioritize my.jar in the classpath by > default > - > > Key: HADOOP-12728 > URL: https://issues.apache.org/jira/browse/HADOOP-12728 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 2.7.1 >Reporter: Ovidiu Gheorghioiu >Priority: Minor > > Found this surprising behavior when testing a dev version of a jar that was > already in the hadoop classpath:"hadoop jar ./my.jar" used the system > my.jar, which was an old version that did not contain my bug fix. Since > "hadoop jar" is the rough equivalent of running an executable, it should use > the version passed on the command line. > Even worse than my case (which took a while to figure out with log messages) > is when I'd be testing that the new version works the same as the old in some > use case. Then I'd think it did, even though the new version was completely > broken. > Allen mentioned verbally that there are some tricky aspects to this, but to > open this issue for tracking / brainstorming. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-7678) Nightly build+test should run with "continue on error" for automated testing after successful build
[ https://issues.apache.org/jira/browse/HADOOP-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422944#comment-15422944 ] Allen Wittenauer edited comment on HADOOP-7678 at 8/16/16 4:08 PM: --- This was fixed by switching to Apache Yetus qbt for nightly builds. Closing. was (Author: aw): Switching to Apache Yetus QBT fixed this. Closing. > Nightly build+test should run with "continue on error" for automated testing > after successful build > --- > > Key: HADOOP-7678 > URL: https://issues.apache.org/jira/browse/HADOOP-7678 > Project: Hadoop Common > Issue Type: Test > Components: build >Affects Versions: 0.20.205.0, 0.23.0 >Reporter: Matt Foley >Assignee: Allen Wittenauer > > It appears that scripts for nightly build in Apache Jenkins will stop after > unit testing if any unit tests fail. Therefore, contribs, schedulers, and > some other system-level automated tests don't ever run until the unit tests > are clean. This results in two-phase cleanup of broken builds, which wastes > developers' time. Please change them to run even in the presence of unit > test errors, as long as the compile+packaging build successfully. > This jira does not relate to CI builds, which emphasize test-patch execution. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-7678) Nightly build+test should run with "continue on error" for automated testing after successful build
[ https://issues.apache.org/jira/browse/HADOOP-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HADOOP-7678. -- Resolution: Fixed Switching to Apache Yetus QBT fixed this. Closing. > Nightly build+test should run with "continue on error" for automated testing > after successful build > --- > > Key: HADOOP-7678 > URL: https://issues.apache.org/jira/browse/HADOOP-7678 > Project: Hadoop Common > Issue Type: Test > Components: build >Affects Versions: 0.20.205.0, 0.23.0 >Reporter: Matt Foley >Assignee: Allen Wittenauer > > It appears that scripts for nightly build in Apache Jenkins will stop after > unit testing if any unit tests fail. Therefore, contribs, schedulers, and > some other system-level automated tests don't ever run until the unit tests > are clean. This results in two-phase cleanup of broken builds, which wastes > developers' time. Please change them to run even in the presence of unit > test errors, as long as the compile+packaging build successfully. > This jira does not relate to CI builds, which emphasize test-patch execution. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information
[ https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422908#comment-15422908 ] Sean Mackrory commented on HADOOP-13494: No test is labelled as a failure, but it reports a timeout and previous successful runs show 3547 tests. 7 are missing in this run... {code}Results : Tests run: 3540, Failures: 0, Errors: 0, Skipped: 141 [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 16:49.541s [INFO] Finished at: Tue Aug 16 02:17:48 UTC 2016 [INFO] Final Memory: 24M/287M [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-common: There was a timeout or other error in the fork -> [Help 1] [ERROR] {code} > ReconfigurableBase can log sensitive information > > > Key: HADOOP-13494 > URL: https://issues.apache.org/jira/browse/HADOOP-13494 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.2.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch, > HADOOP-13494.003.patch, HADOOP-13494.004.patch > > > ReconfigurableBase will log old and new configuration values, which may cause > sensitive parameters (most notably cloud storage keys, though there may be > other instances) to get included in the logs. > Given the currently small list of reconfigurable properties, an argument > could be made for simply not logging the property values at all, but this is > not the only instance where potentially sensitive configuration gets written > somewhere else in plaintext. I think a generic mechanism for redacting > sensitive information for textual display will be useful to some of the web > UIs too. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
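A generic redaction mechanism of the kind HADOOP-13494 describes — mask the values of configuration keys matching sensitive patterns before they reach logs or web UIs — could be sketched like this (the class name and pattern list are illustrative assumptions, not the patch's actual implementation):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class RedactorSketch {
  // Key fragments treated as sensitive; a real implementation would make
  // this list configurable rather than hard-coded.
  private static final List<Pattern> SENSITIVE = Arrays.asList(
      Pattern.compile(".*secret.*"),
      Pattern.compile(".*password.*"),
      Pattern.compile(".*oauth\\.token.*"));

  // Returns the value unchanged unless the key looks sensitive,
  // in which case a fixed placeholder is substituted.
  static String redact(String key, String value) {
    for (Pattern p : SENSITIVE) {
      if (p.matcher(key.toLowerCase()).matches()) {
        return "<redacted>";
      }
    }
    return value;
  }

  public static void main(String[] args) {
    System.out.println(redact("fs.s3a.secret.key", "AKIA..."));  // prints <redacted>
    System.out.println(redact("dfs.replication", "3"));          // prints 3
  }
}
```

A ReconfigurableBase-style caller would then log `redact(key, oldVal)` and `redact(key, newVal)` instead of the raw values, which addresses both the reconfiguration log and the plaintext web-UI cases the description mentions.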
[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package
[ https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422777#comment-15422777 ] Hadoop QA commented on HADOOP-13419: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 16s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 25s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 243 unchanged - 3 fixed = 244 total (was 246) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} hadoop-common-project_hadoop-common-jdk1.8.0_101 with JDK v1.8.0_101 generated 0 new + 0 unchanged - 6 fixed = 0 total (was 6) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} hadoop-common-project_hadoop-common-jdk1.7.0_101 with JDK v1.7.0_101 generated 0 new + 7 unchanged - 6 fixed = 7 total (was 13) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 13s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 77m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823886/HADOOP-13419-branch-2.01.patch | | JIRA Issue | HADOOP-13419 | | Optional Tests | asflicense compile javac javadoc mvninstall
[jira] [Updated] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package
[ https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13419: Attachment: HADOOP-13419-branch-2.01.patch > Fix javadoc warnings by JDK8 in hadoop-common package > - > > Key: HADOOP-13419 > URL: https://issues.apache.org/jira/browse/HADOOP-13419 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13419-branch-2.01.patch, HADOOP-13419.01.patch, > HADOOP-13419.02.patch, HADOOP-13419.03.patch > > > Fix compile warning generated after migrate JDK8. > This is a subtask of HADOOP-13369. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13499) Support session credentials for authenticating with Aliyun
[ https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] uncleGen updated HADOOP-13499: -- Priority: Minor (was: Major) > Support session credentials for authenticating with Aliyun > -- > > Key: HADOOP-13499 > URL: https://issues.apache.org/jira/browse/HADOOP-13499 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: uncleGen >Assignee: uncleGen >Priority: Minor > Fix For: HADOOP-12756 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422555#comment-15422555 ] Hadoop QA commented on HADOOP-13498:

-1 overall

| Vote | Subsystem | Runtime | Comment |
| 0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 8m 15s | HADOOP-12756 passed |
| +1 | compile | 0m 12s | HADOOP-12756 passed |
| +1 | checkstyle | 0m 11s | HADOOP-12756 passed |
| +1 | mvnsite | 0m 15s | HADOOP-12756 passed |
| +1 | mvneclipse | 0m 12s | HADOOP-12756 passed |
| -1 | findbugs | 0m 25s | hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. |
| +1 | javadoc | 0m 12s | HADOOP-12756 passed |
| +1 | mvninstall | 0m 12s | the patch passed |
| +1 | compile | 0m 10s | the patch passed |
| +1 | javac | 0m 10s | the patch passed |
| +1 | checkstyle | 0m 9s | the patch passed |
| +1 | mvnsite | 0m 13s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 33s | the patch passed |
| +1 | javadoc | 0m 10s | the patch passed |
| +1 | unit | 0m 12s | hadoop-aliyun in the patch passed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 13m 19s | |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823860/HADOOP-13498-HADOOP-12756.001.patch |
| JIRA Issue | HADOOP-13498 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 1a6c598f5769 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10262/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10262/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10262/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> the number of multi-part upload part should not bigger than 10000
> -----------------------------------------------------------------
>
> Key: HADOOP-13498
> URL: https://issues.apache.org/jira/browse/HADOOP-13498
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
>
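The issue being tested here is that OSS multipart uploads allow at most 10,000 parts, so a fixed `fs.oss.multipart.upload.size` can make large files unuploadable. A common remedy (a sketch only, not the actual HADOOP-13498 patch; the class and method names below are illustrative) is to grow the effective part size whenever the configured size would exceed the part-count limit:

```java
// Sketch: pick a part size that keeps a multipart upload within the
// 10,000-part cap while respecting the 100 KB minimum part size.
// MultipartSizer and choosePartSize are hypothetical names.
public final class MultipartSizer {
    static final long MAX_PARTS = 10000L;           // OSS hard limit on part count
    static final long MIN_PART_SIZE = 100 * 1024L;  // 100 KB lower bound per part

    /** Returns an effective part size so ceil(fileSize/partSize) <= MAX_PARTS. */
    static long choosePartSize(long fileSize, long configuredPartSize) {
        long partSize = Math.max(configuredPartSize, MIN_PART_SIZE);
        long parts = (fileSize + partSize - 1) / partSize;  // ceiling division
        if (parts > MAX_PARTS) {
            // Configured size would need too many parts: grow it just enough.
            partSize = (fileSize + MAX_PARTS - 1) / MAX_PARTS;
        }
        return partSize;
    }
}
```

With this policy a test can use the 100 KB minimum (as proposed in the comment above) and still upload a ~1 GB object, since the part size silently grows once 10,000 parts would be exceeded.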
[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422556#comment-15422556 ] Hadoop QA commented on HADOOP-13491:

-1 overall

| Vote | Subsystem | Runtime | Comment |
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 8m 15s | HADOOP-12756 passed |
| +1 | compile | 0m 12s | HADOOP-12756 passed |
| +1 | checkstyle | 0m 11s | HADOOP-12756 passed |
| +1 | mvnsite | 0m 14s | HADOOP-12756 passed |
| +1 | mvneclipse | 0m 11s | HADOOP-12756 passed |
| -1 | findbugs | 0m 25s | hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. |
| +1 | javadoc | 0m 11s | HADOOP-12756 passed |
| +1 | mvninstall | 0m 12s | the patch passed |
| +1 | compile | 0m 11s | the patch passed |
| +1 | javac | 0m 11s | the patch passed |
| +1 | checkstyle | 0m 9s | the patch passed |
| +1 | mvnsite | 0m 13s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 32s | hadoop-tools/hadoop-aliyun generated 0 new + 0 unchanged - 8 fixed = 0 total (was 8) |
| +1 | javadoc | 0m 10s | the patch passed |
| +1 | unit | 0m 13s | hadoop-aliyun in the patch passed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 13m 23s | |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823855/HADOOP-13491-HADOOP-12756.003.patch |
| JIRA Issue | HADOOP-13491 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 9fc48c2efc46 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10261/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10261/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10261/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> fix several warnings from findbugs
> ----------------------------------
>
> Key: HADOOP-13491
> URL: https://issues
[jira] [Work started] (HADOOP-13499) Support session credentials for authenticating with Aliyun
[ https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-13499 started by uncleGen. - > Support session credentials for authenticating with Aliyun > -- > > Key: HADOOP-13499 > URL: https://issues.apache.org/jira/browse/HADOOP-13499 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: uncleGen >Assignee: uncleGen > Fix For: HADOOP-12756 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] uncleGen updated HADOOP-13498: -- Attachment: (was: HADOOP-13498-HADOOP-12756.001.patch) > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: uncleGen >Assignee: uncleGen > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] uncleGen updated HADOOP-13498: -- Attachment: HADOOP-13498-HADOOP-12756.001.patch > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: uncleGen >Assignee: uncleGen > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13481) User end documents for Aliyun OSS FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422528#comment-15422528 ] Hadoop QA commented on HADOOP-13481:

-1 overall

| Vote | Subsystem | Runtime | Comment |
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 6m 55s | HADOOP-12756 passed |
| +1 | compile | 0m 12s | HADOOP-12756 passed |
| +1 | checkstyle | 0m 11s | HADOOP-12756 passed |
| +1 | mvnsite | 0m 14s | HADOOP-12756 passed |
| +1 | mvneclipse | 0m 12s | HADOOP-12756 passed |
| -1 | findbugs | 0m 21s | hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. |
| +1 | javadoc | 0m 11s | HADOOP-12756 passed |
| +1 | mvninstall | 0m 10s | the patch passed |
| +1 | compile | 0m 10s | the patch passed |
| +1 | javac | 0m 10s | the patch passed |
| +1 | checkstyle | 0m 8s | the patch passed |
| +1 | mvnsite | 0m 11s | the patch passed |
| +1 | mvneclipse | 0m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 26s | the patch passed |
| +1 | javadoc | 0m 9s | the patch passed |
| +1 | unit | 0m 10s | hadoop-aliyun in the patch passed. |
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 11m 33s | |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823850/HADOOP-13481-HADOOP-12756.002.patch |
| JIRA Issue | HADOOP-13481 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 367b8f57a50c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10260/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10260/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10260/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> User end documents for Aliyun OSS FileSystem
> --------------------------------------------
>
> Key: HADOOP-13481
> URL: https://issues.apache.org/jira/browse/HADOOP-13481
>
[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] uncleGen updated HADOOP-13491: -- Attachment: HADOOP-13491-HADOOP-12756.003.patch

> fix several warnings from findbugs
> ----------------------------------
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
> Affects Versions: HADOOP-12756
> Reporter: uncleGen
> Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RR    org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR    org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RV    Exceptional return value of java.io.File.delete() ignored in org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> IS    Inconsistent synchronization of org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> IS    Inconsistent synchronization of org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> IS    Inconsistent synchronization of org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStre
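The SR_NOT_CHECKED warnings above arise because `java.io.InputStream.skip(long)` is allowed to skip fewer bytes than requested (or none), so its return value must be checked. A typical fix (an illustrative sketch; `StreamUtils.skipFully` is a hypothetical helper, not the actual HADOOP-13491 patch) loops until the requested count has been consumed:

```java
import java.io.IOException;
import java.io.InputStream;

final class StreamUtils {
    /**
     * Skips exactly n bytes, looping because skip() may skip fewer than
     * requested. Throws if the stream ends before n bytes are consumed.
     */
    static void skipFully(InputStream in, long n) throws IOException {
        while (n > 0) {
            long skipped = in.skip(n);
            if (skipped <= 0) {
                // skip() may legally return 0; fall back to read() to make
                // progress or detect end-of-stream.
                if (in.read() < 0) {
                    throw new IOException("Unexpected end of stream while skipping");
                }
                skipped = 1;
            }
            n -= skipped;
        }
    }
}
```

The RV_RETURN_VALUE_IGNORED_BAD_PRACTICE warning on `File.delete()` is handled analogously: check the boolean result and log a warning when the temporary file could not be removed.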
[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs
[ https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] uncleGen updated HADOOP-13491: -- Attachment: (was: HADOOP-13491-HADOOP-12756.003.patch)

> fix several warnings from findbugs
> ----------------------------------
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
> Affects Versions: HADOOP-12756
> Reporter: uncleGen
> Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RR    org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR    org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RV    Exceptional return value of java.io.File.delete() ignored in org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> IS    Inconsistent synchronization of org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> IS    Inconsistent synchronization of org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> IS    Inconsistent synchronization of org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details)
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunO
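The IS2_INCONSISTENT_SYNC warnings above flag fields such as `position` and `partRemaining` that are read or written both inside and outside `synchronized` blocks. The usual remedy is to route every access through methods that hold the same monitor. A minimal sketch of that pattern (illustrative only; `Cursor` is a hypothetical class, not code from the patch):

```java
// Sketch of the IS2_INCONSISTENT_SYNC fix: all reads and writes of the
// shared state go through synchronized methods on the same object, so
// FindBugs sees the field locked 100% of the time.
final class Cursor {
    private long position;  // shared mutable state, only touched under the lock

    synchronized long position() {          // synchronized read
        return position;
    }

    synchronized void advance(long n) {     // synchronized write
        position += n;
    }
}
```

An alternative, when only visibility (not compound updates) is needed, is declaring the field `volatile`; which choice is right depends on how `seek()` and `read()` interact in the actual stream class.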
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422513#comment-15422513 ] Hadoop QA commented on HADOOP-13498:

-1 overall

| Vote | Subsystem | Runtime | Comment |
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 6m 39s | HADOOP-12756 passed |
| +1 | compile | 0m 13s | HADOOP-12756 passed |
| +1 | checkstyle | 0m 11s | HADOOP-12756 passed |
| +1 | mvnsite | 0m 14s | HADOOP-12756 passed |
| +1 | mvneclipse | 0m 11s | HADOOP-12756 passed |
| -1 | findbugs | 0m 22s | hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 extant Findbugs warnings. |
| +1 | javadoc | 0m 12s | HADOOP-12756 passed |
| +1 | mvninstall | 0m 11s | the patch passed |
| +1 | compile | 0m 9s | the patch passed |
| +1 | javac | 0m 9s | the patch passed |
| -0 | checkstyle | 0m 9s | hadoop-tools/hadoop-aliyun: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | mvnsite | 0m 12s | the patch passed |
| +1 | mvneclipse | 0m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 28s | the patch passed |
| +1 | javadoc | 0m 10s | the patch passed |
| +1 | unit | 0m 12s | hadoop-aliyun in the patch passed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 11m 32s | |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823849/HADOOP-13498-HADOOP-12756.001.patch |
| JIRA Issue | HADOOP-13498 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 5b89a1ec0835 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10259/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10259/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10259/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10259/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> the number of multi-part upload part should not bigger than 10000
> -----------------------------------------------------------------
[jira] [Updated] (HADOOP-13481) User end documents for Aliyun OSS FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] uncleGen updated HADOOP-13481: -- Attachment: HADOOP-13481-HADOOP-12756.002.patch > User end documents for Aliyun OSS FileSystem > > > Key: HADOOP-13481 > URL: https://issues.apache.org/jira/browse/HADOOP-13481 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: uncleGen >Assignee: uncleGen >Priority: Minor > Fix For: HADOOP-12756 > > Attachments: HADOOP-13481-HADOOP-12756.001.patch, > HADOOP-13481-HADOOP-12756.002.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13481) User end documents for Aliyun OSS FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] uncleGen updated HADOOP-13481: -- Attachment: (was: HADOOP-13481-HADOOP-12756.002.patch) > User end documents for Aliyun OSS FileSystem > > > Key: HADOOP-13481 > URL: https://issues.apache.org/jira/browse/HADOOP-13481 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: uncleGen >Assignee: uncleGen >Priority: Minor > Fix For: HADOOP-12756 > > Attachments: HADOOP-13481-HADOOP-12756.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org