[jira] [Commented] (HADOOP-14909) Fix the word of "erasure encoding" in the top page
[ https://issues.apache.org/jira/browse/HADOOP-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182059#comment-16182059 ]

Akira Ajisaka commented on HADOOP-14909:
----------------------------------------

+1

> Fix the word of "erasure encoding" in the top page
> --------------------------------------------------
>
>                 Key: HADOOP-14909
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14909
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Trivial
>         Attachments: HADOOP-14909.1.patch
>
> Since "erasure coding" is a more general word, we should use it.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14910) Upgrade netty-all jar to 4.0.37.Final
[ https://issues.apache.org/jira/browse/HADOOP-14910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinayakumar B reassigned HADOOP-14910:
--------------------------------------

    Assignee: Vinayakumar B

> Upgrade netty-all jar to 4.0.37.Final
> -------------------------------------
>
>                 Key: HADOOP-14910
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14910
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Vinayakumar B
>            Assignee: Vinayakumar B
>            Priority: Critical
>
> Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities reported.
[jira] [Created] (HADOOP-14910) Upgrade netty-all jar to 4.0.37.Final
Vinayakumar B created HADOOP-14910:
-----------------------------------

             Summary: Upgrade netty-all jar to 4.0.37.Final
                 Key: HADOOP-14910
                 URL: https://issues.apache.org/jira/browse/HADOOP-14910
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Vinayakumar B
            Priority: Critical

Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities reported.
[jira] [Updated] (HADOOP-14822) hadoop-project/pom.xml is executable
[ https://issues.apache.org/jira/browse/HADOOP-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-14822:
-----------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.1.0
                   3.0.0-beta1
                   2.9.0
           Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.0, and branch-2. Thank you, [~ajayydv]!

> hadoop-project/pom.xml is executable
> ------------------------------------
>
>                 Key: HADOOP-14822
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14822
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Akira Ajisaka
>            Assignee: Ajay Kumar
>            Priority: Minor
>              Labels: newbie
>             Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>         Attachments: HADOOP-14822.01.patch
>
> No need for pom.xml to be executable.
[jira] [Commented] (HADOOP-14822) hadoop-project/pom.xml is executable
[ https://issues.apache.org/jira/browse/HADOOP-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181998#comment-16181998 ]

Akira Ajisaka commented on HADOOP-14822:
----------------------------------------

+1
[jira] [Commented] (HADOOP-14909) Fix the word of "erasure encoding" in the top page
[ https://issues.apache.org/jira/browse/HADOOP-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181976#comment-16181976 ]

Hadoop QA commented on HADOOP-14909:
------------------------------------

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 13m 8s | trunk passed |
| +1 | mvnsite | 0m 13s | trunk passed |
| +1 | shadedclient | 22m 5s | branch has no errors when building and testing our client artifacts. |
|| || || || Patch Compile Tests ||
| +1 | mvnsite | 0m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 53s | patch has no errors when building and testing our client artifacts. |
|| || || || Other Tests ||
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
|    |            | 32m 58s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14909 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889221/HADOOP-14909.1.patch |
| Optional Tests | asflicense mvnsite |
| uname | Linux 8b3b8bc8c578 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0da29cb |
| modules | C: hadoop-project U: hadoop-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13387/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Assigned] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar reassigned HADOOP-14788:
-----------------------------------

    Assignee: Ajay Kumar

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --------------------------------------------------------------
>
>                 Key: HADOOP-14788
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14788
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.8.1
>            Reporter: Steve Loughran
>            Assignee: Ajay Kumar
>            Priority: Minor
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps with the filename, so losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the path & operation. Maybe it's time to add an IOE version of {{NetUtils.wrapException()}} which handles the broader set of IOEs.
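The wrapping pattern the report describes can be sketched in isolation. The helper below is a hypothetical illustration (the name `wrapWithPath` and the message format are invented here; this is not Hadoop's actual `NetUtils.wrapException`): it adds path context to an IOException while preserving its concrete class, so type-based catch blocks keep working.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Constructor;

public class WrapIOE {
  // Hypothetical helper in the spirit of NetUtils.wrapException: rewrap an
  // IOException with extra path context without discarding its concrete class.
  static IOException wrapWithPath(String path, IOException e) {
    try {
      // Most IOException subclasses expose a (String) constructor.
      Constructor<? extends IOException> ctor =
          e.getClass().getConstructor(String.class);
      IOException wrapped =
          ctor.newInstance("Error reading " + path + ": " + e.getMessage());
      wrapped.initCause(e);
      return wrapped;
    } catch (ReflectiveOperationException noStringCtor) {
      // Fall back to a plain IOException when the subclass cannot be cloned.
      return new IOException("Error reading " + path + ": " + e.getMessage(), e);
    }
  }

  public static void main(String[] args) {
    IOException wrapped =
        wrapWithPath("/tokens/creds.bin", new FileNotFoundException("no such file"));
    // The wrapper keeps the original type, so callers can still distinguish
    // "missing token file" from other I/O failures.
    System.out.println(wrapped.getClass().getSimpleName());
    System.out.println(wrapped.getMessage());
  }
}
```

This is exactly what a plain `new IOException(filename, e)` wrapping loses: a caller that catches `FileNotFoundException` separately no longer sees that type once the cause is buried inside a generic IOException.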
[jira] [Updated] (HADOOP-14909) Fix the word of "erasure encoding" in the top page
[ https://issues.apache.org/jira/browse/HADOOP-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HADOOP-14909:
--------------------------------------
    Attachment: HADOOP-14909.1.patch

Uploaded the 1st patch.
[jira] [Updated] (HADOOP-14909) Fix the word of "erasure encoding" in the top page
[ https://issues.apache.org/jira/browse/HADOOP-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HADOOP-14909:
--------------------------------------
        Status: Patch Available  (was: Open)
[jira] [Commented] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181947#comment-16181947 ]

Hadoop QA commented on HADOOP-14908:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 14m 42s | trunk passed |
| +1 | compile | 15m 14s | trunk passed |
| +1 | checkstyle | 0m 37s | trunk passed |
| +1 | mvnsite | 1m 3s | trunk passed |
| +1 | shadedclient | 10m 39s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 30s | trunk passed |
| +1 | javadoc | 0m 53s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 42s | the patch passed |
| +1 | compile | 11m 13s | the patch passed |
| +1 | javac | 11m 13s | the patch passed |
| -0 | checkstyle | 0m 37s | hadoop-common-project/hadoop-common: The patch generated 20 new + 21 unchanged - 0 fixed = 41 total (was 21) |
| +1 | mvnsite | 0m 59s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 8m 43s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 31s | the patch passed |
| +1 | javadoc | 0m 50s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 11s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
|    |            | 78m 20s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14908 |
| GITHUB PR | https://github.com/apache/hadoop/pull/278 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c9584f794005 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0da29cb |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13385/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/13385/artifact/patchprocess/whitespace-eol.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13385/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output |
[jira] [Commented] (HADOOP-14822) hadoop-project/pom.xml is executable
[ https://issues.apache.org/jira/browse/HADOOP-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181948#comment-16181948 ]

Hadoop QA commented on HADOOP-14822:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 12m 24s | trunk passed |
| +1 | compile | 0m 7s | trunk passed |
| +1 | mvnsite | 0m 10s | trunk passed |
| +1 | shadedclient | 20m 3s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 8s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 7s | the patch passed |
| +1 | compile | 0m 5s | the patch passed |
| +1 | javac | 0m 5s | the patch passed |
| +1 | mvnsite | 0m 6s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 8m 44s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 7s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 6s | hadoop-project in the patch passed. |
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
|    |            | 30m 28s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14822 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889214/HADOOP-14822.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 5a83b1e616c2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0da29cb |
| Default Java | 1.8.0_144 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13386/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13386/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Created] (HADOOP-14909) Fix the word of "erasure encoding" in the top page
Takanobu Asanuma created HADOOP-14909:
--------------------------------------

             Summary: Fix the word of "erasure encoding" in the top page
                 Key: HADOOP-14909
                 URL: https://issues.apache.org/jira/browse/HADOOP-14909
             Project: Hadoop Common
          Issue Type: Improvement
          Components: documentation
            Reporter: Takanobu Asanuma
            Assignee: Takanobu Asanuma
            Priority: Trivial

Since "erasure coding" is a more general word, we should use it.
[jira] [Updated] (HADOOP-14822) hadoop-project/pom.xml is executable
[ https://issues.apache.org/jira/browse/HADOOP-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HADOOP-14822:
--------------------------------
        Status: Patch Available  (was: Open)
[jira] [Updated] (HADOOP-14822) hadoop-project/pom.xml is executable
[ https://issues.apache.org/jira/browse/HADOOP-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HADOOP-14822:
--------------------------------
    Attachment: HADOOP-14822.01.patch
[jira] [Assigned] (HADOOP-14822) hadoop-project/pom.xml is executable
[ https://issues.apache.org/jira/browse/HADOOP-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar reassigned HADOOP-14822:
-----------------------------------

    Assignee: Ajay Kumar
[jira] [Assigned] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer reassigned HADOOP-14908:
-----------------------------------------

    Assignee: Johannes Alberti

> CrossOriginFilter should trigger regex on more input
> ----------------------------------------------------
>
>                 Key: HADOOP-14908
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14908
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common, security
>    Affects Versions: 3.0.0-beta1
>            Reporter: Allen Wittenauer
>            Assignee: Johannes Alberti
>
> Currently, CrossOriginFilter.java only performs regex matching if there is an asterisk (\*) in the config:
> {code}
> if (allowedOrigin.contains("*")) {
> {code}
> This means that entries such as:
> {code}
> http?://foo.example.com
> https://[a-z][0-9].example.com
> {code}
> ... and other patterns that succinctly limit the input space either need to be fully expanded or need their match space dramatically widened with an asterisk in order to pass through the filter.
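The limitation described above can be made concrete with a small sketch (class and method names here are invented for illustration; this is not the filter's actual code, nor the patch in PR #278): the current substring test never routes a character-class entry into regex mode, while a broader metacharacter test would.

```java
import java.util.regex.Pattern;

public class OriginFilterSketch {
  // Current behavior: only an asterisk flips an allowed-origin entry
  // into regex matching.
  static boolean isRegexCurrent(String allowedOrigin) {
    return allowedOrigin.contains("*");
  }

  // Hypothetical broader trigger: treat any common regex metacharacter
  // as a signal that the entry is a pattern, not a literal origin.
  static boolean isRegexProposed(String allowedOrigin) {
    for (char c : "*?[](){}|+^$".toCharArray()) {
      if (allowedOrigin.indexOf(c) >= 0) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    String entry = "https://[a-z][0-9].example.com";
    // With the current check the entry is compared as a literal string,
    // so a real origin like the one below can never match it.
    System.out.println(isRegexCurrent(entry));   // false
    System.out.println(isRegexProposed(entry));  // true
    // Handed to the regex engine, the entry matches succinctly.
    System.out.println(Pattern.matches(entry, "https://a1.example.com"));  // true
  }
}
```

The trade-off is that a literal origin containing, say, a `+` would now be treated as a pattern, which is presumably why the pull request title stresses staying backward compatible.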
[jira] [Assigned] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer reassigned HADOOP-14908:
-----------------------------------------

    Assignee:     (was: Allen Wittenauer)
[jira] [Updated] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-14908:
--------------------------------------
        Status: Patch Available  (was: Open)
[jira] [Assigned] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer reassigned HADOOP-14908:
-----------------------------------------

    Assignee: Allen Wittenauer
[jira] [Commented] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181871#comment-16181871 ]

Johannes Alberti commented on HADOOP-14908:
-------------------------------------------

A proposed patch is here: https://github.com/apache/hadoop/pull/278
[jira] [Issue Comment Deleted] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Johannes Alberti updated HADOOP-14908:
--------------------------------------
    Comment: was deleted

(was: A proposed patch is here https://github.com/apache/hadoop/pull/278)
[jira] [Issue Comment Deleted] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Johannes Alberti updated HADOOP-14908:
--------------------------------------
    Comment: was deleted

(was: A proposed patch is here https://github.com/apache/hadoop/pull/278)
[jira] [Commented] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181872#comment-16181872 ] Johannes Alberti commented on HADOOP-14908: --- A proposed patch is here https://github.com/apache/hadoop/pull/278 > CrossOriginFilter should trigger regex on more input > > > Key: HADOOP-14908 > URL: https://issues.apache.org/jira/browse/HADOOP-14908 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer > > Currently, CrossOriginFilter.java limits regex matching only if there is an > asterisk (\*) in the config. > {code} > if (allowedOrigin.contains("*")) { > {code} > This means that entries such as: > {code} > http?://foo.example.com > https://[a-z][0-9].example.com > {code} > ... and other patterns that succinctly limit the input space need to either > be fully expanded or dramatically have their space increased by using an > asterisk in order to pass through the filter. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181867#comment-16181867 ] ASF GitHub Bot commented on HADOOP-14908: - GitHub user johannes-altiscale opened a pull request: https://github.com/apache/hadoop/pull/278 (HADOOP-14908) allow for real regex patterns (and be backward compatible) You can merge this pull request into a Git repository by running: $ git pull https://github.com/Altiscale/hadoop johannes-HADOOP-14908-allow-full-regexp Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hadoop/pull/278.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #278 commit 90e3816f606b495efa09cfb0f26c3a6d37ac Author: Johannes Alberti Date: 2017-09-27T01:50:10Z allow for real regex patterns (and be backward compatible)
[jira] [Commented] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181808#comment-16181808 ] Hadoop QA commented on HADOOP-14277: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 14s{color} | {color:red} HADOOP-14277 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14277 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863097/HADOOP-14277.003.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13384/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> TestTrash.testTrashRestarts is flaky > > > Key: HADOOP-14277 > URL: https://issues.apache.org/jira/browse/HADOOP-14277 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eric Badger >Assignee: Weiwei Yang > Labels: flaky-test > Attachments: HADOOP-14277.001.patch, HADOOP-14277.002.patch, > HADOOP-14277.003.patch > > > {noformat} > junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but > actual is 3 expected:<2> but was:<3> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.TestCase.assertEquals(TestCase.java:401) > at > org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892) > at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14277: - Labels: flaky-test (was: ) > TestTrash.testTrashRestarts is flaky > > > Key: HADOOP-14277 > URL: https://issues.apache.org/jira/browse/HADOOP-14277 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eric Badger >Assignee: Weiwei Yang > Labels: flaky-test > Attachments: HADOOP-14277.001.patch, HADOOP-14277.002.patch, > HADOOP-14277.003.patch > > > {noformat} > junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but > actual is 3 expected:<2> but was:<3> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.TestCase.assertEquals(TestCase.java:401) > at > org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892) > at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14902: Attachment: HADOOP-14902.002.patch Thanks for the review, [~jlowe]. I have updated the patch. > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Attachments: HADOOP-14902.001.patch, HADOOP-14902.002.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181663#comment-16181663 ] Jason Lowe commented on HADOOP-14902: - Thanks for the patch! Since genFile already throws IOExceptions for write errors, it seems incorrect to suppress errors encountered during close. IMHO they should be treated the same, otherwise callers of genFile may believe the file was written properly when it wasn't. Therefore I think we can simplify it a bit so we don't need a nested try block. All we need to do is track whether the file was closed within the try block and have the finally block close it if necessary with a straight out.close(). The exception can propagate out just as it would for a write error.
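The shape Jason Lowe suggests can be sketched roughly as below. This is a hedged sketch, not the actual HADOOP-14902 patch: the class and method names are illustrative, System.currentTimeMillis() stands in for Hadoop's Time.now(), and a plain OutputStream stands in for the real stream genFile writes to.

```java
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch only (names are hypothetical, not LoadGenerator's API):
// time the actual close() between the two timestamps, remember whether it
// succeeded, and let the finally block do a straight close() on the error
// path so its IOException propagates exactly like a write error would.
public class CloseTimingSketch {
  static final int WRITE_CLOSE = 0;       // mirrors the index used in the issue
  final long[] executionTime = new long[1];

  void writeAndClose(OutputStream out, byte[] data) throws IOException {
    boolean closed = false;
    try {
      out.write(data);                    // write errors propagate as before
      long startTime = System.currentTimeMillis();
      out.close();                        // the close now happens inside the timing window
      closed = true;
      executionTime[WRITE_CLOSE] += (System.currentTimeMillis() - startTime);
    } finally {
      if (!closed) {
        out.close();                      // close on the error path; no nested
                                          // try, exception reaches the caller
      }
    }
  }
}
```

Compared with attaching a nested try around the close, this keeps a single try block and avoids silently swallowing a close failure after a successful write.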
[jira] [Commented] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181635#comment-16181635 ] Hadoop QA commented on HADOOP-14902: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 24s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 87m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14902 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889141/HADOOP-14902.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ea4fc528bfce 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9df0500 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13383/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13383/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop
[jira] [Updated] (HADOOP-13917) Ensure yetus personality runs the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13917: - Summary: Ensure yetus personality runs the integration tests for the shaded client (was: Ensure nightly builds run the integration tests for the shaded client) > Ensure yetus personality runs the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13917: - Resolution: Fixed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) filed YETUS-550 to add the log > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181541#comment-16181541 ] Allen Wittenauer commented on HADOOP-14908: --- There are likely a bunch of ways to solve this one. Off the top, I can think of three: #1: always treat it as a regex. This is backwards incompatible, in the sense that periods are now wildcards and it opens up the namespace on existing installations. #2: add additional triggers. It might be simpler to just check for ? and [, but this will prevent character classes, boundary matches, and other "exotics" from being used. #3: a flag/config that says whether everything/always/etc should be treated as a regex. Personally, I'm leaning towards #1.
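Option #1 can be sketched as follows. This is a hedged illustration, not the real CrossOriginFilter code: isOriginAllowed is a hypothetical helper, and the actual filter would cache compiled patterns rather than recompile per request.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// Illustrative sketch of always treating a configured allowed-origin entry as
// a regex (option #1 above), instead of gating regex handling on
// allowedOrigin.contains("*"). Entries like "https://[a-z][0-9].example.com"
// then match directly, with no asterisk required.
public class OriginRegexSketch {
  static boolean isOriginAllowed(String allowedOrigin, String origin) {
    try {
      // Note: an unescaped "." now matches any character, which is the
      // backward-incompatibility concern raised in the comment.
      return Pattern.compile(allowedOrigin).matcher(origin).matches();
    } catch (PatternSyntaxException e) {
      // A malformed configured pattern denies the origin rather than throwing.
      return false;
    }
  }
}
```

With this shape, the succinct patterns from the issue description work as-is, while existing configs that relied on literal matching of entries containing "." would need their periods escaped.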
[jira] [Created] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
Allen Wittenauer created HADOOP-14908: - Summary: CrossOriginFilter should trigger regex on more input Key: HADOOP-14908 URL: https://issues.apache.org/jira/browse/HADOOP-14908 Project: Hadoop Common Issue Type: Bug Components: common, security Affects Versions: 3.0.0-beta1 Reporter: Allen Wittenauer Currently, CrossOriginFilter.java limits regex matching only if there is an asterisk (*) in the config. {code} if (allowedOrigin.contains("*")) { {code} This means that entries such as: {code} http?://foo.example.com https://[a-z][0-9].example.com {code} ... and other patterns that succinctly limit the input space need to either be fully expanded or dramatically have their space increased by using an asterisk in order to pass through the filter. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-14908: -- Issue Type: Improvement (was: Bug) > CrossOriginFilter should trigger regex on more input > > > Key: HADOOP-14908 > URL: https://issues.apache.org/jira/browse/HADOOP-14908 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer > > Currently, CrossOriginFilter.java limits regex matching only if there is an > asterisk (*) in the config. > {code} > if (allowedOrigin.contains("*")) { > {code} > This means that entries such as: > {code} > http?://foo.example.com > https://[a-z][0-9].example.com > {code} > ... and other patterns that succinctly limit the input space need to either > be fully expanded or dramatically have their space increased by using an > asterisk in order to pass through the filter. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14908) CrossOriginFilter should trigger regex on more input
[ https://issues.apache.org/jira/browse/HADOOP-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-14908: -- Description: Currently, CrossOriginFilter.java limits regex matching only if there is an asterisk (\*) in the config. {code} if (allowedOrigin.contains("*")) { {code} This means that entries such as: {code} http?://foo.example.com https://[a-z][0-9].example.com {code} ... and other patterns that succinctly limit the input space need to either be fully expanded or dramatically have their space increased by using an asterisk in order to pass through the filter. was: Currently, CrossOriginFilter.java limits regex matching only if there is an asterisk (*) in the config. {code} if (allowedOrigin.contains("*")) { {code} This means that entries such as: {code} http?://foo.example.com https://[a-z][0-9].example.com {code} ... and other patterns that succinctly limit the input space need to either be fully expanded or dramatically have their space increased by using an asterisk in order to pass through the filter. > CrossOriginFilter should trigger regex on more input > > > Key: HADOOP-14908 > URL: https://issues.apache.org/jira/browse/HADOOP-14908 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer > > Currently, CrossOriginFilter.java limits regex matching only if there is an > asterisk (\*) in the config. > {code} > if (allowedOrigin.contains("*")) { > {code} > This means that entries such as: > {code} > http?://foo.example.com > https://[a-z][0-9].example.com > {code} > ... and other patterns that succinctly limit the input space need to either > be fully expanded or dramatically have their space increased by using an > asterisk in order to pass through the filter. 
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181503#comment-16181503 ] Andrew Wang commented on HADOOP-13917: -- Yea, I think we're good to resolve. One nice-to-have enhancement would be to link to the patch-shadedclient.txt file in the Report/Notes field, otherwise people have to dig it out of the Jenkins artifacts. > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14902: Status: Patch Available (was: Open) > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Attachments: HADOOP-14902.001.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14902: Attachment: HADOOP-14902.001.patch Attached a patch which would attempt to _close_ the OutputStream and add the close time to metrics only if the _close_ is successful. > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Attachments: HADOOP-14902.001.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181459#comment-16181459 ] Sean Busbey commented on HADOOP-13917: -- failed as expected. I also started a rerun of HADOOP-14771 to check the current patch and it passed as expected. think we're good? > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client
[ https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181453#comment-16181453 ] Hadoop QA commented on HADOOP-14771: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 22m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14771 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888347/HADOOP-14771.04.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux c0290fed4bb5 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1267ff2 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13382/testReport/ | | modules | C: hadoop-client-modules/hadoop-client U: hadoop-client-modules/hadoop-client | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13382/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > hadoop-client does not include hadoop-yarn-client > - > > Key: HADOOP-14771 > URL: https://issues.apache.org/jira/browse/HADOOP-14771 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Haibo Chen >Assignee: Ajay Kumar >Priority: Critical > Attachments: HADOOP-14771.01.patch, HADOOP-14771.02.patch, > HADOOP-14771.03.patch, HADOOP-14771.04.patch > > > The hadoop-client does not include
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181403#comment-16181403 ] Hadoop QA commented on HADOOP-13917: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 22m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 9m 0s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s{color} | {color:green} hadoop-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-13917 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889127/HADOOP-14771.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 628bfda0040f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1267ff2 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13381/testReport/ | | modules | C: hadoop-client-modules/hadoop-client U: hadoop-client-modules/hadoop-client | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13381/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch > > > Either QBT or a
[jira] [Commented] (HADOOP-14893) WritableRpcEngine should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HADOOP-14893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181401#comment-16181401 ] Chetna Chaudhari commented on HADOOP-14893: --- Thanks [~ajisakaa] for reviewing and committing this patch. > WritableRpcEngine should use Time.monotonicNow > -- > > Key: HADOOP-14893 > URL: https://issues.apache.org/jira/browse/HADOOP-14893 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chetna Chaudhari >Assignee: Chetna Chaudhari >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 > > Attachments: HADOOP-14893-2.patch, HADOOP-14893.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
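For context on why patches like HADOOP-14893 matter: Hadoop's {{Time.monotonicNow()}} wraps {{System.nanoTime()}}, which cannot jump when NTP or an administrator adjusts the system clock, so it is the right source for measuring elapsed time; {{Time.now()}} (wall clock) can go backwards mid-measurement and produce negative durations. A minimal standalone sketch of the pattern, using the JDK calls directly rather than Hadoop's {{Time}} class:

```java
// Sketch of why duration measurements should use a monotonic clock.
// Hadoop's Time.monotonicNow() wraps System.nanoTime(); this standalone
// version uses the JDK calls directly (an illustration, not Hadoop's code).
public class MonotonicTimer {
    // Milliseconds from a monotonic source: immune to NTP steps and
    // administrator clock changes, unlike System.currentTimeMillis().
    static long monotonicNowMillis() {
        return System.nanoTime() / 1_000_000L;
    }

    // Measure how long a task takes using the monotonic clock.
    static long timeMillis(Runnable task) {
        long start = monotonicNowMillis();
        task.run();
        return monotonicNowMillis() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        });
        // A monotonic duration can never be negative; a wall-clock one can,
        // if the system clock is set backwards during the measurement.
        System.out.println("elapsed ~ " + elapsed + " ms");
    }
}
```

The same reasoning applies to LoadGenerator in HADOOP-14881 below: any code path that subtracts two timestamps to get a duration should take both from the monotonic clock.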
[jira] [Updated] (HADOOP-14890) Move up to AWS SDK 1.11.199
[ https://issues.apache.org/jira/browse/HADOOP-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14890: Resolution: Fixed Fix Version/s: 2.9.0 Status: Resolved (was: Patch Available) committed to branch-2 as well. > Move up to AWS SDK 1.11.199 > --- > > Key: HADOOP-14890 > URL: https://issues.apache.org/jira/browse/HADOOP-14890 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Fix For: 2.9.0, 3.0.0 > > Attachments: HADOOP-14890-001.patch, HADOOP-14890-branch-2-002.patch > > > the AWS SDK in Hadoop 3.0.-beta-1 prints a warning whenever you call abort() > on a stream, which is what we need to do whenever doing long-distance seeks > in a large file opened with fadvise=normal > {code} > 2017-09-20 17:51:50,459 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - > 2017-09-20 17:51:50,460 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Starting read() [pos = > 45603305] > 2017-09-20 17:51:50,461 [ScalaTest-main-running-S3ASeekReadSuite] WARN > internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - > Not all bytes were read from the S3ObjectInputStream, aborting HTTP > connection. This is likely an error and may result in sub-optimal behavior. > Request only the bytes you need via a ranged GET or drain the input stream > after use. 
> 2017-09-20 17:51:51,263 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Duration of read() [pos = > 45603305] = 803,650,637 nS > {code} > This goes away if we upgrade to the latest SDK, at least for the > non-localdynamo bits -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
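The SDK warning quoted above offers two remedies for a partially read stream: request only the bytes you need via a ranged GET, or drain the remainder before close so the HTTP connection can be reused, with abort() as the deliberate, connection-discarding third option for long seeks. A JDK-only sketch of the draining half; the AWS SDK is not involved here, and a plain {{InputStream}} stands in for {{S3ObjectInputStream}}:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrates "drain the input stream after use" from the SDK warning:
// read and discard the leftover bytes so the underlying connection can
// be returned to the pool instead of being aborted.
public class DrainDemo {
    // Read and discard everything left on the stream; returns bytes drained.
    static long drain(InputStream in) throws IOException {
        long drained = 0;
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            drained += n;
        }
        return drained;
    }

    // Simulate consuming only part of a response, then draining the rest.
    static long leftoverAfterPartialRead(int total, int wanted) {
        try (InputStream in = new ByteArrayInputStream(new byte[total])) {
            in.read(new byte[wanted]);   // consume only the bytes we need
            return drain(in);            // drain the rest before close
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("leftover drained: "
            + leftoverAfterPartialRead(10_000, 100));
    }
}
```

Whether draining or aborting is cheaper depends on how many bytes remain, which is why S3A's fadvise=normal logic has to choose per seek rather than always draining.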
[jira] [Commented] (HADOOP-14890) Move up to AWS SDK 1.11.199
[ https://issues.apache.org/jira/browse/HADOOP-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181324#comment-16181324 ] Hadoop QA commented on HADOOP-14890: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} branch-2 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-aws in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:eaf5c66 | | JIRA Issue | HADOOP-14890 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889114/HADOOP-14890-branch-2-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 03eeac6a2289 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / c4de765 | | Default Java | 1.7.0_151 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13380/testReport/ |
[jira] [Updated] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13917: - Attachment: HADOOP-14771.02.patch > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch, HADOOP-14771.02.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181318#comment-16181318 ] Sean Busbey commented on HADOOP-13917: -- sure. lemme post that. > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181282#comment-16181282 ] Andrew Wang commented on HADOOP-13917: -- I see a shadedclient run! Should we also test a failure case? I think we have one from the old HADOOP-14771 patch. > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14890) Move up to AWS SDK 1.11.199
[ https://issues.apache.org/jira/browse/HADOOP-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181272#comment-16181272 ] Aaron Fabbri commented on HADOOP-14890: --- +1 on branch-2 patch, LGTM. > Move up to AWS SDK 1.11.199 > --- > > Key: HADOOP-14890 > URL: https://issues.apache.org/jira/browse/HADOOP-14890 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HADOOP-14890-001.patch, HADOOP-14890-branch-2-002.patch > > > the AWS SDK in Hadoop 3.0.-beta-1 prints a warning whenever you call abort() > on a stream, which is what we need to do whenever doing long-distance seeks > in a large file opened with fadvise=normal > {code} > 2017-09-20 17:51:50,459 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - > 2017-09-20 17:51:50,460 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Starting read() [pos = > 45603305] > 2017-09-20 17:51:50,461 [ScalaTest-main-running-S3ASeekReadSuite] WARN > internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - > Not all bytes were read from the S3ObjectInputStream, aborting HTTP > connection. This is likely an error and may result in sub-optimal behavior. > Request only the bytes you need via a ranged GET or drain the input stream > after use. > 2017-09-20 17:51:51,263 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Duration of read() [pos = > 45603305] = 803,650,637 nS > {code} > This goes away if we upgrade to the latest SDK, at least for the > non-localdynamo bits -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181256#comment-16181256 ] Anu Engineer commented on HADOOP-14901: --- [~xkrogen] Thanks for the reminder, fixed. > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Fix For: 2.8.2, 3.1.0 > > Attachments: HADOOP-14901.001.patch, HADOOP-14901-branch-2.001.patch, > HADOOP-14901-branch-2.002.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HADOOP-14901: -- Fix Version/s: 3.1.0 2.8.2 > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Fix For: 2.8.2, 3.1.0 > > Attachments: HADOOP-14901.001.patch, HADOOP-14901-branch-2.001.patch, > HADOOP-14901-branch-2.002.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181239#comment-16181239 ] Erik Krogen commented on HADOOP-14901: -- Hey [~anu], can you set the fix versions? > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch, HADOOP-14901-branch-2.001.patch, > HADOOP-14901-branch-2.002.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
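For readers following along: the pattern the HADOOP-14901 description recommends looks roughly like the sketch below. One shared {{ObjectMapper}} instead of one per call, with {{ObjectReader}}/{{ObjectWriter}} derived from it for hot paths, since both are immutable and therefore safe to share across threads. This assumes jackson-databind on the classpath; the {{Point}} type is invented for illustration and is not from the Hadoop patch:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;

// Reuse pattern from the issue description: ObjectMapper construction is
// expensive, so build it once; ObjectReader/ObjectWriter are immutable,
// pre-bound to a type, and fully thread-safe.
public class JsonCodec {
    public static class Point {          // hypothetical example type
        public int x;
        public int y;
    }

    // One shared mapper instead of "new ObjectMapper()" per call.
    private static final ObjectMapper MAPPER = new ObjectMapper();
    // Reader/writer derived once, reused on every call.
    private static final ObjectReader READER = MAPPER.readerFor(Point.class);
    private static final ObjectWriter WRITER = MAPPER.writerFor(Point.class);

    public static String toJson(Point p) throws Exception {
        return WRITER.writeValueAsString(p);
    }

    public static Point fromJson(String json) throws Exception {
        return READER.readValue(json);
    }

    public static void main(String[] args) throws Exception {
        Point p = new Point();
        p.x = 1;
        p.y = 2;
        System.out.println(toJson(p));
    }
}
```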
[jira] [Updated] (HADOOP-14890) Move up to AWS SDK 1.11.199
[ https://issues.apache.org/jira/browse/HADOOP-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14890: Target Version/s: 2.9.0, 3.0.0 (was: 3.0.0) Status: Patch Available (was: Reopened) > Move up to AWS SDK 1.11.199 > --- > > Key: HADOOP-14890 > URL: https://issues.apache.org/jira/browse/HADOOP-14890 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HADOOP-14890-001.patch, HADOOP-14890-branch-2-002.patch > > > the AWS SDK in Hadoop 3.0.-beta-1 prints a warning whenever you call abort() > on a stream, which is what we need to do whenever doing long-distance seeks > in a large file opened with fadvise=normal > {code} > 2017-09-20 17:51:50,459 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - > 2017-09-20 17:51:50,460 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Starting read() [pos = > 45603305] > 2017-09-20 17:51:50,461 [ScalaTest-main-running-S3ASeekReadSuite] WARN > internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - > Not all bytes were read from the S3ObjectInputStream, aborting HTTP > connection. This is likely an error and may result in sub-optimal behavior. > Request only the bytes you need via a ranged GET or drain the input stream > after use. > 2017-09-20 17:51:51,263 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Duration of read() [pos = > 45603305] = 803,650,637 nS > {code} > This goes away if we upgrade to the latest SDK, at least for the > non-localdynamo bits -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14890) Move up to AWS SDK 1.11.199
[ https://issues.apache.org/jira/browse/HADOOP-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14890: Attachment: HADOOP-14890-branch-2-002.patch Branch-2 patch 002 This is patch 001 + a cherry pick of a change of {{org.apache.hadoop.fs.s3a.ITestS3ACredentialsInURLtestInvalidCredentialsFail()}} from trunk, otherwise the auth failure happens outside the try/catch block. All other tests work, s3 ireland > Move up to AWS SDK 1.11.199 > --- > > Key: HADOOP-14890 > URL: https://issues.apache.org/jira/browse/HADOOP-14890 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HADOOP-14890-001.patch, HADOOP-14890-branch-2-002.patch > > > the AWS SDK in Hadoop 3.0.-beta-1 prints a warning whenever you call abort() > on a stream, which is what we need to do whenever doing long-distance seeks > in a large file opened with fadvise=normal > {code} > 2017-09-20 17:51:50,459 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - > 2017-09-20 17:51:50,460 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Starting read() [pos = > 45603305] > 2017-09-20 17:51:50,461 [ScalaTest-main-running-S3ASeekReadSuite] WARN > internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - > Not all bytes were read from the S3ObjectInputStream, aborting HTTP > connection. This is likely an error and may result in sub-optimal behavior. > Request only the bytes you need via a ranged GET or drain the input stream > after use. 
> 2017-09-20 17:51:51,263 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Duration of read() [pos = > 45603305] = 803,650,637 nS > {code} > This goes away if we upgrade to the latest SDK, at least for the > non-localdynamo bits -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14890) Move up to AWS SDK 1.11.199
[ https://issues.apache.org/jira/browse/HADOOP-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reopened HADOOP-14890: - reopening to backport to branch-2; about to submit patch > Move up to AWS SDK 1.11.199 > --- > > Key: HADOOP-14890 > URL: https://issues.apache.org/jira/browse/HADOOP-14890 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HADOOP-14890-001.patch > > > the AWS SDK in Hadoop 3.0.-beta-1 prints a warning whenever you call abort() > on a stream, which is what we need to do whenever doing long-distance seeks > in a large file opened with fadvise=normal > {code} > 2017-09-20 17:51:50,459 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - > 2017-09-20 17:51:50,460 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Starting read() [pos = > 45603305] > 2017-09-20 17:51:50,461 [ScalaTest-main-running-S3ASeekReadSuite] WARN > internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - > Not all bytes were read from the S3ObjectInputStream, aborting HTTP > connection. This is likely an error and may result in sub-optimal behavior. > Request only the bytes you need via a ranged GET or drain the input stream > after use. > 2017-09-20 17:51:51,263 [ScalaTest-main-running-S3ASeekReadSuite] INFO > s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Duration of read() [pos = > 45603305] = 803,650,637 nS > {code} > This goes away if we upgrade to the latest SDK, at least for the > non-localdynamo bits -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations
[ https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181138#comment-16181138 ] Bharat Viswanadham commented on HADOOP-14881: - Thank You [~jlowe] for reviewing and committing the patch. > LoadGenerator should use Time.monotonicNow() to measure durations > - > > Key: HADOOP-14881 > URL: https://issues.apache.org/jira/browse/HADOOP-14881 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chetna Chaudhari >Assignee: Bharat Viswanadham > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, > HADOOP-14881.03.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14907) Memory leak in FileSystem cache
[ https://issues.apache.org/jira/browse/HADOOP-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181120#comment-16181120 ] Thomas Graves commented on HADOOP-14907: Can you give more details on where the heap dump is from? It looks like you are running Spark. Are you using the --keytab option? > Memory leak in FileSystem cache > --- > > Key: HADOOP-14907 > URL: https://issues.apache.org/jira/browse/HADOOP-14907 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.4 >Reporter: cen yuhai > Attachments: screenshot-1.png, screenshot-2.png > > > There is a memory leak in FileSystem cache. It will take a lot of memory. I > think the root cause is that the equals function in class Key is not right. > You can see in the screenshot-1.png, the same user etl is in different keys... > And also FileSystem cache should be an LRU cache -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14907) Memory leak in FileSystem cache
[ https://issues.apache.org/jira/browse/HADOOP-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181044#comment-16181044 ] Daryn Sharp commented on HADOOP-14907: -- You have a leak, but the screenshot isn't showing the leak. The highlighted strings are both in the same UGI instance. Screenshot-1 shows a {{HashMap.Node}} from the {{FileSystem.Cache}}. The {{HashMap.Node.key}} field above it is a {{FileSystem.Cache.Key}} which references a {{UserGroupInformation}}. The {{HashMap.Node.value}} field shown is a {{DistributedFileSystem}} instance, which references the same {{FileSystem.Cache.Key}}. You can see from the hashcodes that the ugi is identical. The problem is you have ~20k {{Subject}} instances. Are you repeatedly invoking {{UserGroupInformation.createRemoteUser}}? > Memory leak in FileSystem cache > --- > > Key: HADOOP-14907 > URL: https://issues.apache.org/jira/browse/HADOOP-14907 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.4 >Reporter: cen yuhai > Attachments: screenshot-1.png, screenshot-2.png > > > There is a memory leak in FileSystem cache. It will take a lot of memory. I > think the root cause is that the equals function in class Key is not right. > You can see in the screenshot-1.png, the same user etl is in different keys... > And also FileSystem cache should be an LRU cache -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
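Daryn's diagnosis, distinct {{Subject}} instances producing cache keys that never compare equal, can be reproduced in miniature. The classes below are simplified stand-ins for {{FileSystem.Cache.Key}} and {{javax.security.auth.Subject}}, not Hadoop's actual code; identity-based equality on the subject means one cache entry per {{createRemoteUser}}-style call rather than one per logical user:

```java
import java.util.HashMap;
import java.util.Map;

// Miniature reproduction of the failure mode: the cache key's equality
// delegates to Subject identity, so creating a fresh Subject for the
// "same" user on every lookup makes the cache grow without bound.
public class CacheLeakDemo {
    // Stand-in for javax.security.auth.Subject: no equals/hashCode
    // override, so two Subjects for the same user are still "different".
    static class Subject { }

    // Stand-in for FileSystem.Cache.Key: equality is Subject identity.
    static class Key {
        final String user;
        final Subject subject;
        Key(String user) { this.user = user; this.subject = new Subject(); }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).subject == subject;
        }
        @Override public int hashCode() {
            return System.identityHashCode(subject);
        }
    }

    static int leakedEntries(int lookups) {
        Map<Key, String> cache = new HashMap<>();
        for (int i = 0; i < lookups; i++) {
            // Same logical user every time, but a fresh Subject each call,
            // analogous to invoking createRemoteUser("etl") in a loop.
            cache.putIfAbsent(new Key("etl"), "filesystem-instance");
        }
        return cache.size();  // one entry per call, not one per user
    }

    public static void main(String[] args) {
        System.out.println("entries for one user: " + leakedEntries(1000));
    }
}
```

The usual remedy on the caller's side is to create the remote-user UGI once and reuse it, or to call {{FileSystem.closeAllForUGI}} when a per-request UGI is discarded.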
[jira] [Commented] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB
[ https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181002#comment-16181002 ] Steve Loughran commented on HADOOP-14899: - Also: which Azure endpoint did you run the current test suites against, and did everything pass? Yetus can't run the full test suite, so submitters are required to do so. It's nice and fast on trunk. > Restrict Access to setPermission operation when authorization is enabled in > WASB > > > Key: HADOOP-14899 > URL: https://issues.apache.org/jira/browse/HADOOP-14899 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Kannapiran Srinivasan >Assignee: Kannapiran Srinivasan > Labels: fs, secure, wasb > Attachments: HADOOP-14899-001.patch, HADOOP-14899-002.patch > > > In the case of authorization-enabled WASB clusters, we need to restrict setting > permissions on files or folders to the owner or a list of privileged users. > Currently, in the WASB implementation, no check happens during the setPermission > call even when authorization is enabled. In this JIRA we would > like to add the check on the setPermission call in the NativeAzureFileSystem > implementation so that only the owner, the privileged list of users, or daemon > users can change the permissions of files/folders. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB
[ https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180997#comment-16180997 ] Steve Loughran commented on HADOOP-14899: - Privilege is one of those words I can never spell right, so I had to check its spelling before reviewing this. h3. {{NativeAzureFileSystem}} * L698: you can't change the name of properties which users may have been using. If you do want to move to a new property, you need to retain the old one & add it to the list of deprecated properties through {{Configuration.addDeprecations}}. Given that {{NativeAzureFileSystem}} is tagged @Public, @Stable, you'll have to do the same there, with a new constant, and @deprecate the previous one. Or just leave the name of the option alone. That's simpler, and works with existing tests and docs. * L2916: multiline javadocs need to start on the second line of the comment; all javadocs must have a "." at the end for the javadoc compiler to be happy. * L2980: if that line is > 80 chars, you'll need to split it. Yes, even if it existed before: this is our chance to clean up. * L7971. The original code used actualUser, falling back to getCurrentUser when actualUser == null. The new change appears to only go off getCurrentUser. This is a major change and the bit that worries me the most. Why the change? Bear in mind nobody ever fully understands UGI internals, so I'm not sure it's wrong; I just need to understand why the change was made, and what the implications are. Are you confident that getCurrentUser never returns null, or does that need to be handled too? (FWIW, I don't see that it can return null from my quick look at the getCurrentUser -> getLoginUser() -> loginUserFromSubject sequence.) * L3055. Better to convert the array to a list early on. And, as the conf is unlikely to change during the life of the client, do it on FS init & cache it. I think you could consider moving the allowed-user logic out of the class, so that you can test it more easily.
But, given you've got those tests already written... h3. Tests Tests look OK, though the doAs() code complicates reading. Moving to pure Java 8 will fix that in future. But: it's pretty much the same codepath followed each time: create rule, create mock user, create test path, attempt to manipulate permissions, check outcome (positive vs negative). Which makes me think: you could factor out all the tests into one or two methods with common behaviour * test lists to include, [], [""], ["*"], ["user", "*"]; ["","user"], ["*", "*"]. I think you've got most of those covered. > Restrict Access to setPermission operation when authorization is enabled in > WASB > > > Key: HADOOP-14899 > URL: https://issues.apache.org/jira/browse/HADOOP-14899 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Kannapiran Srinivasan >Assignee: Kannapiran Srinivasan > Labels: fs, secure, wasb > Attachments: HADOOP-14899-001.patch, HADOOP-14899-002.patch > > > In case of authorization enabled Wasb clusters, we need to restrict setting > permissions on files or folders to the owner or a list of privileged users. > Currently in the WASB implementation, even when authorization is enabled, no > check happens during the setPermission call. In this JIRA we would > like to add the check on the setPermission call in the NativeAzureFileSystem > implementation so that only the owner, the privileged list of users, or daemon > users can change the permissions of files/folders. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
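The review point about pulling the allowed-user logic out of the class can be made concrete. The sketch below is illustrative only — the helper name {{isPermissionChangeAllowed}} and its exact owner/wildcard semantics are assumptions, not the actual patch's API — but it shows the shape of a check that can be unit-tested against the list variants Steve enumerates ([], [""], ["*"], ...) without any doAs() plumbing.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical standalone version of the WASB setPermission authorization
// check: the owner, a wildcard "*", or explicit membership in the
// privileged-user list grants the right to change permissions.
public final class WasbAuthorizationSketch {
  private WasbAuthorizationSketch() {}

  public static boolean isPermissionChangeAllowed(String requestingUser,
                                                  String owner,
                                                  List<String> chmodAllowedUsers) {
    // The owner may always change permissions on its own files.
    if (requestingUser.equals(owner)) {
      return true;
    }
    // "*" grants the privilege to every authenticated user.
    if (chmodAllowedUsers.contains("*")) {
      return true;
    }
    // Otherwise the user must be explicitly listed.
    return chmodAllowedUsers.contains(requestingUser);
  }

  public static void main(String[] args) {
    // "etl" is neither the owner nor listed, and no wildcard is present.
    System.out.println(
        isPermissionChangeAllowed("etl", "hdfs", Arrays.asList("daemon"))); // -> false
  }
}
```

Because the method is a pure function of its arguments, the per-list test cases collapse into one parameterized loop, which is exactly the factoring-out the review suggests.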
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180976#comment-16180976 ] Hadoop QA commented on HADOOP-13917: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 24m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-minicluster in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-13917 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887020/HADOOP-13917.WIP.0.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux bc6f676e3ced 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e9b790d | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13378/testReport/ | | modules | C: hadoop-minicluster U: hadoop-minicluster | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13378/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch > > > Either QBT or a different jenkins job should run our
[jira] [Updated] (HADOOP-14907) Memory leak in FileSystem cache
[ https://issues.apache.org/jira/browse/HADOOP-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cen yuhai updated HADOOP-14907: --- Description: There is a memory leak in FileSystem cache. It will take a lot of memory. I think the root cause is that the equals function in class Key is not right. You can see in the screenshot-1.png, the same user etl is in different keys... And also FileSystem cache should be an LRU cache was: There is a memory leak in FileSystem cache. It will take a lot of memory.I think the root cause is that the equals function in class Key is not right. You can see in the screenshot-1.png, the same user etl is in different key... And also FileSystem cache should be a > Memory leak in FileSystem cache > --- > > Key: HADOOP-14907 > URL: https://issues.apache.org/jira/browse/HADOOP-14907 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.4 >Reporter: cen yuhai > Attachments: screenshot-1.png, screenshot-2.png > > > There is a memory leak in FileSystem cache. It will take a lot of memory. I > think the root cause is that the equals function in class Key is not right. > You can see in the screenshot-1.png, the same user etl is in different keys... > And also FileSystem cache should be an LRU cache -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14907) Memory leak in FileSystem cache
[ https://issues.apache.org/jira/browse/HADOOP-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cen yuhai updated HADOOP-14907: --- Description: There is a memory leak in FileSystem cache. It will take a lot of memory. I think the root cause is that the equals function in class Key is not right. You can see in the screenshot-1.png, the same user etl is in different keys... And also FileSystem cache should be a was: There is a memory leak in FileSystem cache. It will take a lot of memory. > Memory leak in FileSystem cache > --- > > Key: HADOOP-14907 > URL: https://issues.apache.org/jira/browse/HADOOP-14907 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.4 >Reporter: cen yuhai > Attachments: screenshot-1.png, screenshot-2.png > > > There is a memory leak in FileSystem cache. It will take a lot of memory. I > think the root cause is that the equals function in class Key is not right. > You can see in the screenshot-1.png, the same user etl is in different keys... > And also FileSystem cache should be a -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14907) Memory leak in FileSystem cache
[ https://issues.apache.org/jira/browse/HADOOP-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cen yuhai updated HADOOP-14907: --- Attachment: screenshot-1.png > Memory leak in FileSystem cache > --- > > Key: HADOOP-14907 > URL: https://issues.apache.org/jira/browse/HADOOP-14907 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.4 >Reporter: cen yuhai > Attachments: screenshot-1.png, screenshot-2.png > > > There is a memory leak in FileSystem cache. It will take a lot of memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14907) Memory leak in FileSystem cache
[ https://issues.apache.org/jira/browse/HADOOP-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cen yuhai updated HADOOP-14907: --- Attachment: screenshot-2.png > Memory leak in FileSystem cache > --- > > Key: HADOOP-14907 > URL: https://issues.apache.org/jira/browse/HADOOP-14907 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.4 >Reporter: cen yuhai > Attachments: screenshot-1.png, screenshot-2.png > > > There is a memory leak in FileSystem cache. It will take a lot of memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14907) Memory leak in FileSystem cache
cen yuhai created HADOOP-14907: -- Summary: Memory leak in FileSystem cache Key: HADOOP-14907 URL: https://issues.apache.org/jira/browse/HADOOP-14907 Project: Hadoop Common Issue Type: Bug Components: hdfs-client Affects Versions: 2.7.4 Reporter: cen yuhai There is a memory leak in FileSystem cache. It will take a lot of memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14891) Remove
[ https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-14891: Summary: Remove (was: Guava 21.0+ libraries not compatible with user jobs) > Remove > --- > > Key: HADOOP-14891 > URL: https://issues.apache.org/jira/browse/HADOOP-14891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 2.8.1 >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Attachments: HADOOP-14891.001-branch-2.patch > > > User provided a guava 23.0 jar as part of the job submission. > {code} > 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service > org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) > at > org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989) > at > org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936) > at > org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703) > at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508) > Caused by: java.lang.NoSuchMethodError: > 
com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at org.apache.hadoop.ipc.metrics.RpcMetrics.<init>(RpcMetrics.java:74) > at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80) > at org.apache.hadoop.ipc.Server.<init>(Server.java:2658) > at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134) > at > org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930) > 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to > do a clean initiateStop for Scheduler: [0:TezYarn] > {code} > Metrics2 has been relying on the deprecated toStringHelper for some time now, > which was finally removed in guava 21.0. Removing the dependency on this > method will free up the user to supply their own guava jar again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
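The shape of the fix can be sketched without Guava at all. This is an illustrative, self-contained example — the class and its fields are invented, not the actual MetricsRegistry code — showing that the "ClassName{field=value, ...}" string formerly built via {{Objects.toStringHelper}} can be produced with a plain {{StringBuilder}}, which compiles on JDK7 and pins no particular Guava version on the classpath.

```java
// Hypothetical Guava-free toString(): the same output Objects.toStringHelper
// would have produced, using only the JDK, so user-supplied Guava 21.0+ jars
// no longer trigger NoSuchMethodError at runtime.
public class ToStringDemo {
  private final String name = "rpcMetrics";
  private final int port = 8020;

  @Override
  public String toString() {
    // Equivalent of Objects.toStringHelper(this).add("name", name).add("port", port)
    return new StringBuilder(getClass().getSimpleName())
        .append("{name=").append(name)
        .append(", port=").append(port)
        .append('}').toString();
  }

  public static void main(String[] args) {
    System.out.println(new ToStringDemo()); // prints "ToStringDemo{name=rpcMetrics, port=8020}"
  }
}
```

Because the replacement touches only toString() bodies, callers are unaffected and the change is safe to backport to the JDK7-based 2.x branches.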
[jira] [Updated] (HADOOP-14838) backport S3guard to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14838: Status: Patch Available (was: Open) > backport S3guard to branch-2 > > > Key: HADOOP-14838 > URL: https://issues.apache.org/jira/browse/HADOOP-14838 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14838-branch-2-001.patch, > HADOOP-14838-branch-2-002.patch, HADOOP-14838-branch-2-003.patch, > HADOOP-14838-branch-2-004.patch > > > Backport S3Guard to branch-2 > this consists of > * classpath updates (AWS SDK, ...) > * hadoop bin classpath and command setup > * java 7 compatibility > * testing > The last patch of HADOOP-13998 brought the java code down to java 7 & has > already been tested/merged with branch-2; all that's left is the packaging, > bin/hadoop and review -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14838) backport S3guard to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14838: Attachment: HADOOP-14838-branch-2-004.patch Patch 004; stripping down the POM as we don't need the dependency set up for the branch-3 shell. Testing: everything, local and ddb. Also tested: cli, which works (without the HADOOP-14220 patch in though, which will follow) Once I've got yetus happy I'm going to commit this as a backport of something which is in trunk > backport S3guard to branch-2 > > > Key: HADOOP-14838 > URL: https://issues.apache.org/jira/browse/HADOOP-14838 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14838-branch-2-001.patch, > HADOOP-14838-branch-2-002.patch, HADOOP-14838-branch-2-003.patch, > HADOOP-14838-branch-2-004.patch > > > Backport S3Guard to branch-2 > this consists of > * classpath updates (AWS SDK, ...) > * hadoop bin classpath and command setup > * java 7 compatibility > * testing > The last patch of HADOOP-13998 brought the java code down to java 7 & has > already been tested/merged with branch-2; all that's left is the packaging, > bin/hadoop and review -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14838) backport S3guard to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14838: Status: Open (was: Patch Available) > backport S3guard to branch-2 > > > Key: HADOOP-14838 > URL: https://issues.apache.org/jira/browse/HADOOP-14838 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14838-branch-2-001.patch, > HADOOP-14838-branch-2-002.patch, HADOOP-14838-branch-2-003.patch > > > Backport S3Guard to branch-2 > this consists of > * classpath updates (AWS SDK, ...) > * hadoop bin classpath and command setup > * java 7 compatibility > * testing > The last patch of HADOOP-13998 brought the java code down to java 7 & has > already been tested/merged with branch-2; all that's left is the packaging, > bin/hadoop and review -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180936#comment-16180936 ] Sean Busbey commented on HADOOP-13917: -- submitted new precommit and qbt runs now that the addendum for YETUS-543 has landed. > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14891) Remove references to Guava Objects.toStringHelper
[ https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-14891: Summary: Remove references to Guava Objects.toStringHelper (was: Remove ) > Remove references to Guava Objects.toStringHelper > - > > Key: HADOOP-14891 > URL: https://issues.apache.org/jira/browse/HADOOP-14891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 2.8.1 >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Attachments: HADOOP-14891.001-branch-2.patch > > > Use provided a guava 23.0 jar as part of the job submission. > {code} > 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service > org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) > at > org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989) > at > org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936) > at > org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703) > at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508) > Caused by: java.lang.NoSuchMethodError: > 
com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at org.apache.hadoop.ipc.metrics.RpcMetrics.(RpcMetrics.java:74) > at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80) > at org.apache.hadoop.ipc.Server.(Server.java:2658) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134) > at > org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930) > 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to > do a clean initiateStop for Scheduler: [0:TezYarn] > {code} > Metrics2 has been relying on deprecated toStringHelper for some time now > which was finally removed in guava 21.0. Removing the dependency on this > method will free up the user to supplying their own guava jar again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14891) Remove references to Guava Objects.toStringHelper
[ https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-14891: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.3 2.9.0 Status: Resolved (was: Patch Available) Thanks to Jonathan for the contribution and to Akira for additional review! I committed this to branch-2 and branch-2.8. > Remove references to Guava Objects.toStringHelper > - > > Key: HADOOP-14891 > URL: https://issues.apache.org/jira/browse/HADOOP-14891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 2.8.1 >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Fix For: 2.9.0, 2.8.3 > > Attachments: HADOOP-14891.001-branch-2.patch > > > Use provided a guava 23.0 jar as part of the job submission. > {code} > 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service > org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) > at > org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989) > at > org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936) > at > 
org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703) > at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508) > Caused by: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at org.apache.hadoop.ipc.metrics.RpcMetrics.(RpcMetrics.java:74) > at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80) > at org.apache.hadoop.ipc.Server.(Server.java:2658) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134) > at > org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930) > 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to > do a clean initiateStop for Scheduler: [0:TezYarn] > {code} > Metrics2 has been relying on deprecated toStringHelper for some time now > which was finally removed in guava 21.0. Removing the dependency on this > method will free up the user to supplying their own guava jar again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14891) Guava 21.0+ libraries not compatible with user jobs
[ https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180907#comment-16180907 ] Jason Lowe commented on HADOOP-14891: - This is closely related to HADOOP-14382 which removed the MoreObjects.StringHelper from trunk. Unfortunately we can't just cherry-pick that fix for 2.9 and 2.8 since it leverages java.util.StringJoiner which is new in JDK8. > Guava 21.0+ libraries not compatible with user jobs > --- > > Key: HADOOP-14891 > URL: https://issues.apache.org/jira/browse/HADOOP-14891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 2.8.1 >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Attachments: HADOOP-14891.001-branch-2.patch > > > Use provided a guava 23.0 jar as part of the job submission. > {code} > 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service > org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) > at > org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989) > at > org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936) > at > 
org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703) > at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508) > Caused by: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at org.apache.hadoop.ipc.metrics.RpcMetrics.(RpcMetrics.java:74) > at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80) > at org.apache.hadoop.ipc.Server.(Server.java:2658) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134) > at > org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930) > 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to > do a clean initiateStop for Scheduler: [0:TezYarn] > {code} > Metrics2 has been relying on deprecated toStringHelper for some time now > which was finally removed in guava 21.0. Removing the dependency on this > method will free up the user to supplying their own guava jar again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
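The NoSuchMethodError above comes from MetricsRegistry#toString calling Guava's Objects.toStringHelper, which Guava 21.0 removed. A minimal, hypothetical illustration of the kind of fix discussed (building the string with no Guava dependency at all, so it stays JDK7-compatible for branch-2 where trunk's java.util.StringJoiner is unavailable) might look like this; the class and field names are illustrative, not the actual MetricsRegistry code:

```java
// Illustrative sketch only: a toString() with no dependency on Guava's
// Objects.toStringHelper (removed in Guava 21.0), so the user may supply
// any Guava jar. Class and field names are hypothetical.
public class RegistryInfo {
  private final String name;
  private final int metricCount;

  public RegistryInfo(String name, int metricCount) {
    this.name = name;
    this.metricCount = metricCount;
  }

  @Override
  public String toString() {
    // Plain StringBuilder: JDK7-compatible, no third-party dependency.
    return new StringBuilder("RegistryInfo{")
        .append("name=").append(name)
        .append(", metrics=").append(metricCount)
        .append('}').toString();
  }

  public static void main(String[] args) {
    System.out.println(new RegistryInfo("rpc", 3));
  }
}
```

Because only JDK classes are used, whichever guava jar the user submits with the job can no longer break the metrics toString path.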
[jira] [Commented] (HADOOP-14891) Guava 21.0+ libraries not compatible with user jobs
[ https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180902#comment-16180902 ] Jason Lowe commented on HADOOP-14891: - +1 lgtm as well. Committing this. > Guava 21.0+ libraries not compatible with user jobs > --- > > Key: HADOOP-14891 > URL: https://issues.apache.org/jira/browse/HADOOP-14891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 2.8.1 >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Attachments: HADOOP-14891.001-branch-2.patch > > > Use provided a guava 23.0 jar as part of the job submission. > {code} > 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service > org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) > at > org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989) > at > org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936) > at > org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703) > at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508) > Caused by: java.lang.NoSuchMethodError: > 
com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at org.apache.hadoop.ipc.metrics.RpcMetrics.(RpcMetrics.java:74) > at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80) > at org.apache.hadoop.ipc.Server.(Server.java:2658) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134) > at > org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930) > 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to > do a clean initiateStop for Scheduler: [0:TezYarn] > {code} > Metrics2 has been relying on deprecated toStringHelper for some time now > which was finally removed in guava 21.0. Removing the dependency on this > method will free up the user to supplying their own guava jar again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB
[ https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180747#comment-16180747 ] Kannapiran Srinivasan commented on HADOOP-14899: [~ste...@apache.org] : Can you please review this patch? > Restrict Access to setPermission operation when authorization is enabled in > WASB > > > Key: HADOOP-14899 > URL: https://issues.apache.org/jira/browse/HADOOP-14899 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Kannapiran Srinivasan >Assignee: Kannapiran Srinivasan > Labels: fs, secure, wasb > Attachments: HADOOP-14899-001.patch, HADOOP-14899-002.patch > > > In the case of authorization-enabled WASB clusters, we need to restrict setting > permissions on files or folders to the owner or a list of privileged users. > Currently in the WASB implementation, even when authorization is enabled, no > check is performed during the setPermission call. In this JIRA we would > like to add that check to the setPermission call in the NativeAzureFileSystem > implementation so that only the owner, the privileged list of users, or daemon > users can change the permissions of files/folders. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
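The authorization check the issue describes (only the owner or a configured set of privileged/daemon users may change permissions) can be sketched as a small stand-alone predicate. All names here are hypothetical; this is not the NativeAzureFileSystem API:

```java
// Hypothetical sketch of the setPermission restriction described above:
// with authorization enabled, only the file owner or a configured list
// of privileged ("daemon") users may change permissions.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetPermissionCheck {
  private final boolean authorizationEnabled;
  private final Set<String> privilegedUsers;

  public SetPermissionCheck(boolean authorizationEnabled, String... privileged) {
    this.authorizationEnabled = authorizationEnabled;
    this.privilegedUsers = new HashSet<>(Arrays.asList(privileged));
  }

  /** Returns true if currentUser may call setPermission on a file owned by owner. */
  public boolean canSetPermission(String currentUser, String owner) {
    if (!authorizationEnabled) {
      return true;                               // no restriction without authorization
    }
    return currentUser.equals(owner)             // the owner may always chmod
        || privilegedUsers.contains(currentUser); // as may privileged/daemon users
  }
}
```

In the real patch the caller would raise an authorization exception rather than return a boolean, but the decision logic is the same shape.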
[jira] [Commented] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB
[ https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180744#comment-16180744 ] Hadoop QA commented on HADOOP-14899: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 47s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14899 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889064/HADOOP-14899-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 003c56e18a9f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e9b790d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13377/testReport/ | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13377/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Restrict Access to setPermission operation when authorization is enabled in > WASB > > > Key: HADOOP-14899 > URL: https://issues.apache.org/jira/browse/HADOOP-14899 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Kannapiran Srinivasan >Assignee: Kannapiran Srinivasan > Labels: fs, secure, wasb > Attachments: HADOOP-14899-001.patch, HADOOP-14899-002.patch > > > In case of authorization enabled Wasb clusters, we need to restrict setting > permissions on files or
[jira] [Commented] (HADOOP-14531) Improve S3A error handling & reporting
[ https://issues.apache.org/jira/browse/HADOOP-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180730#comment-16180730 ] Steve Loughran commented on HADOOP-14531: - And a randomly picked key sequence "fsfd" maps to a bucket which appears to exist but has access disabled, raises Access Denied. That's a slightly different text message than before {code} bin/hadoop s3guard bucket-info s3a://fdsd 2017-09-26 13:57:56,458 INFO s3a.S3ALambda: doesBucketExist on fdsd: java.nio.file.AccessDeniedException: fdsd: doesBucketExist on fdsd: com.amazonaws.services.s3.model.AmazonS3Exception: All access to this object has been disabled (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID: 6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=), S3 Extended Request ID: 6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=:AllAccessDisabled 2017-09-26 13:57:56,459 WARN s3a.S3ALambda: doesBucketExist on fdsd failing after 1 attempts: java.nio.file.AccessDeniedException: fdsd: doesBucketExist on fdsd: com.amazonaws.services.s3.model.AmazonS3Exception: All access to this object has been disabled (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID: 6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=), S3 Extended Request ID: 6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=:AllAccessDisabled java.nio.file.AccessDeniedException: fdsd: doesBucketExist on fdsd: com.amazonaws.services.s3.model.AmazonS3Exception: All access to this object has been disabled (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID: 6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=), S3 Extended Request ID: 
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=:AllAccessDisabled at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:205) at org.apache.hadoop.fs.s3a.S3ALambda.once(S3ALambda.java:122) at org.apache.hadoop.fs.s3a.S3ALambda.lambda$retry$2(S3ALambda.java:233) at org.apache.hadoop.fs.s3a.S3ALambda.retryUntranslated(S3ALambda.java:288) at org.apache.hadoop.fs.s3a.S3ALambda.retry(S3ALambda.java:228) at org.apache.hadoop.fs.s3a.S3ALambda.retry(S3ALambda.java:203) at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:357) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:293) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337) at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3311) at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:997) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:309) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1218) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1227) Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: All access to this object has been disabled (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID: 6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=), S3 Extended Request ID: 6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0= at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at
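The stack trace above shows S3AUtils.translateException turning the raw AmazonS3Exception (403, AllAccessDisabled) into a java.nio.file.AccessDeniedException. A stand-alone sketch of that status-code-to-typed-exception mapping follows; the helper name and the handled codes are illustrative, and the real translateException covers many more cases:

```java
// Sketch of mapping HTTP status codes from an S3-style service onto
// meaningful java.io/java.nio exceptions, as the S3A exception
// translation logic does. Hypothetical helper, not S3AUtils itself.
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class TranslateSketch {
  /** Turn an HTTP status into a typed IOException for the given path. */
  static IOException translate(String path, String message, int status) {
    switch (status) {
      case 403:
        return new AccessDeniedException(path, null, message);     // auth failure
      case 404:
        return new FileNotFoundException(path + ": " + message);   // no such object
      default:
        return new IOException(path + ": " + message + " (status " + status + ")");
    }
  }
}
```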
[jira] [Commented] (HADOOP-14531) Improve S3A error handling & reporting
[ https://issues.apache.org/jira/browse/HADOOP-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180726#comment-16180726 ] Steve Loughran commented on HADOOP-14531: - Bucket exists, but doesn't grant access (FWIW, it's a bucket of mine since ~2010, just on a personal a/c}: Result {{AccessDeniedException}} {code} bin/hadoop s3guard bucket-info s3a://stevel 2017-09-26 14:00:33,944 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. Instead, use fs.s3a.server-side-encryption.key Filesystem s3a://stevel 2017-09-26 14:00:34,132 INFO s3a.S3ALambda: getBucketLocation() on stevel: java.nio.file.AccessDeniedException: stevel: getBucketLocation() on stevel: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 3AF0ECF22A60DD1D; S3 Extended Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk=), S3 Extended Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk=:AccessDenied 2017-09-26 14:00:34,133 WARN s3a.S3ALambda: getBucketLocation() on stevel failing after 1 attempts: java.nio.file.AccessDeniedException: stevel: getBucketLocation() on stevel: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 3AF0ECF22A60DD1D; S3 Extended Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk=), S3 Extended Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk=:AccessDenied java.nio.file.AccessDeniedException: stevel: getBucketLocation() on stevel: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 3AF0ECF22A60DD1D; S3 Extended Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk=), S3 Extended 
Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk=:AccessDenied at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:205) at org.apache.hadoop.fs.s3a.S3ALambda.once(S3ALambda.java:122) at org.apache.hadoop.fs.s3a.S3ALambda.lambda$retry$2(S3ALambda.java:233) at org.apache.hadoop.fs.s3a.S3ALambda.retryUntranslated(S3ALambda.java:288) at org.apache.hadoop.fs.s3a.S3ALambda.retry(S3ALambda.java:228) at org.apache.hadoop.fs.s3a.S3ALambda.retry(S3ALambda.java:203) at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:513) at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:501) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1004) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:309) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1218) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1227) Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 3AF0ECF22A60DD1D; S3 Extended Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk=), S3 Extended Request ID: 1ljfM4kRdqobBBDKMVbRqeguhfn4vTH3uPjhNiU0VR5+GdP8ArUB89Qp3XY5gjUajeBwo2YUJLk= at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4229) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4176) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4170) at com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:938) at
[jira] [Commented] (HADOOP-14531) Improve S3A error handling & reporting
[ https://issues.apache.org/jira/browse/HADOOP-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180719#comment-16180719 ] Steve Loughran commented on HADOOP-14531: - stack on an init with a bucket which doesn't exist, maps to FNFE {code} bin/hadoop s3guard bucket-info s3a://stevel45r5 java.io.FileNotFoundException: Bucket stevel45r5 does not exist at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:361) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:293) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337) at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3311) at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:997) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:309) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1218) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1227) 2017-09-26 13:58:40,024 INFO util.ExitUtil: Exiting with status -1: java.io.FileNotFoundException: Bucket stevel45r5 does not exist {code} > Improve S3A error handling & reporting > -- > > Key: HADOOP-14531 > URL: https://issues.apache.org/jira/browse/HADOOP-14531 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > > Improve S3a error handling and reporting > this includes > # looking at error codes and translating to more specific exceptions > # better retry logic where present > # adding retry logic where not present > # more diagnostics in exceptions > # docs > Overall goals > * things that can be 
retried and will go away are retried for a bit > * things that don't go away when retried fail fast (302, no auth, unknown > host, connection refused) > * meaningful exceptions are built in {{translateException}} > * diagnostics are included, where possible > * our troubleshooting docs are expanded with new failures we encounter > AWS S3 error codes: > http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
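The retry-versus-fail-fast split in the goals above can be sketched as a tiny policy object. The status codes and attempt limit are illustrative; this is not the actual S3A retry policy class:

```java
// Sketch of the retry goals described above: transient errors
// (throttling, server-side failures) are retried up to a limit,
// while errors that will not go away (403 auth failures, redirects,
// bad requests) fail fast on the first attempt.
public class SimpleRetryPolicy {
  private final int maxAttempts;

  public SimpleRetryPolicy(int maxAttempts) {
    this.maxAttempts = maxAttempts;
  }

  /** Transient server-side or throttling errors are worth retrying. */
  static boolean isRetryable(int httpStatus) {
    return httpStatus == 429 || httpStatus == 500 || httpStatus == 503;
  }

  /** Returns true if another attempt should be made after this failure. */
  public boolean shouldRetry(int httpStatus, int attemptsSoFar) {
    return isRetryable(httpStatus) && attemptsSoFar < maxAttempts;
  }
}
```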
[jira] [Comment Edited] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB
[ https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180687#comment-16180687 ] Kannapiran Srinivasan edited comment on HADOOP-14899 at 9/26/17 12:42 PM: -- The following fixes are made in the patch [^HADOOP-14899-002.patch] * Updated the setPermission & setOwner implementation to check only the current user * Updated the tests for setOwner with appropriate asserts * Fixed a typo in the documentation was (Author: kansrini): The following fixes are made in this patch * Updated the setPermission & setOwner implementation to check only the current user * Updated the tests for setOwner with appropriate asserts * Fixed a typo in the documentation > Restrict Access to setPermission operation when authorization is enabled in > WASB > > > Key: HADOOP-14899 > URL: https://issues.apache.org/jira/browse/HADOOP-14899 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Kannapiran Srinivasan >Assignee: Kannapiran Srinivasan > Labels: fs, secure, wasb > Attachments: HADOOP-14899-001.patch, HADOOP-14899-002.patch > > > In the case of authorization-enabled WASB clusters, we need to restrict setting > permissions on files or folders to the owner or a list of privileged users. > Currently in the WASB implementation, even when authorization is enabled, no > check is performed during the setPermission call. In this JIRA we would > like to add that check to the setPermission call in the NativeAzureFileSystem > implementation so that only the owner, the privileged list of users, or daemon > users can change the permissions of files/folders. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB
[ https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kannapiran Srinivasan updated HADOOP-14899: --- Attachment: HADOOP-14899-002.patch The following fixes are made in this patch * Updated the setPermission & setOwner implementation to check only the current user * Updated the tests for setOwner with appropriate asserts * Fixed a typo in the documentation > Restrict Access to setPermission operation when authorization is enabled in > WASB > > > Key: HADOOP-14899 > URL: https://issues.apache.org/jira/browse/HADOOP-14899 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Kannapiran Srinivasan >Assignee: Kannapiran Srinivasan > Labels: fs, secure, wasb > Attachments: HADOOP-14899-001.patch, HADOOP-14899-002.patch > > > In the case of authorization-enabled WASB clusters, we need to restrict setting > permissions on files or folders to the owner or a list of privileged users. > Currently in the WASB implementation, even when authorization is enabled, no > check is performed during the setPermission call. In this JIRA we would > like to add that check to the setPermission call in the NativeAzureFileSystem > implementation so that only the owner, the privileged list of users, or daemon > users can change the permissions of files/folders. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180677#comment-16180677 ] Steve Loughran commented on HADOOP-14872: - LGTM +1, with one little change requested before you commit: In {{testUnbuffer()}} use try-with-resources to close the {{in}} stream even if an assert is raised. Thanks: this'll be a nice little feature > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If > the application is not careful, tons of UOEs will show up in logs. > In comparison, opening a non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. > It is good for CryptoInputStream to implement CanUnbuffer for 3 reasons: > * Release buffer, cache, or any other resource when instructed > * Able to call its wrapped DFSInputStream's unbuffer > * Avoid the UOE described above. Applications may not handle the UOE very > well. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
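The delegation pattern the issue asks for (a decorating stream implementing unbuffer by propagating to its wrapped stream, instead of letting the ClassCastException-based check throw UnsupportedOperationException) can be sketched with minimal stand-in types. These are not the real Hadoop interfaces, just the shape of the fix:

```java
// Sketch of the wrapping pattern discussed above. The interfaces are
// minimal stand-ins for Hadoop's CanUnbuffer and stream classes.
public class UnbufferDemo {
  public interface CanUnbuffer { void unbuffer(); }

  /** Plays the role of DFSInputStream, which really can release buffers. */
  public static class InnerStream implements CanUnbuffer {
    public boolean unbuffered;
    @Override public void unbuffer() { unbuffered = true; }
  }

  /** Plays the role of CryptoInputStream: a decorator over another stream. */
  public static class CryptoLikeStream implements CanUnbuffer {
    private final Object wrapped;
    public CryptoLikeStream(Object wrapped) { this.wrapped = wrapped; }

    @Override public void unbuffer() {
      // Release this stream's own buffers here, then propagate downward
      // if the wrapped stream supports it, rather than throwing.
      if (wrapped instanceof CanUnbuffer) {
        ((CanUnbuffer) wrapped).unbuffer();
      }
    }
  }
}
```

Separately, the review comment's try-with-resources suggestion is the standard way to guarantee the stream under test is closed even when an assertion fails mid-test.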
[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180673#comment-16180673 ] Steve Loughran commented on HADOOP-14220: - thanks > Enhance S3GuardTool with bucket-info and set-capacity commands, tests > - > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, > HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, > HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, > HADOOP-14220-015.patch, HADOOP-14220-016.patch, > HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, > HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, > HADOOP-14220-HADOOP-13345-005.patch > > > Add a diagnostics command to s3guard which does whatever we need to diagnose > problems for a specific (named) s3a url. This is something which can be > attached to bug reports as well as used by developers. > * Properties to log (with provenance attribute, which can track bucket > overrides: s3guard metastore setup, autocreate, capacity, > * table present/absent > * # of keys in DDB table for that bucket? > * any other stats? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14906) ITestAzureConcurrentOutOfBandIo failing: The MD5 value specified in the request did not match with the MD5 value calculated by the server
[ https://issues.apache.org/jira/browse/HADOOP-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14906: Summary: ITestAzureConcurrentOutOfBandIo failing: The MD5 value specified in the request did not match with the MD5 value calculated by the server (was: ITestAzureConcurrentOutOfBandIo failing with checksum errors on write) > ITestAzureConcurrentOutOfBandIo failing: The MD5 value specified in the > request did not match with the MD5 value calculated by the server > - > > Key: HADOOP-14906 > URL: https://issues.apache.org/jira/browse/HADOOP-14906 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 2.9.0, 3.1.0 > Environment: UK BT ASDL connection, 1.8.0_121-b13, azure storage > ireland >Reporter: Steve Loughran > > {{ITestAzureConcurrentOutOfBandIo}} is consistently raising an IOE with the > text "The MD5 value specified in the request did not match with the MD5 value > calculated by the server" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14906) ITestAzureConcurrentOutOfBandIo failing with checksum errors on write
[ https://issues.apache.org/jira/browse/HADOOP-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180605#comment-16180605 ] Steve Loughran commented on HADOOP-14906: - [~Georgi]: thanks for looking at this. Although your patch was the last to go near the test that was failing, the fact that it has "gone away" since I moved to a different network location makes me think it is network-infra-related, and that could be a sign of an underlying problem, maybe even one common to all apps using the Azure storage SDK: we just have to find it first. It'd still be nice to know what's going on, or whether there are improvements which can be made to reporting/recovery. Otherwise, I'll think about closing this as cannot-reproduce for now. Changing the title to make sure the error text is in it (for easier searching). > ITestAzureConcurrentOutOfBandIo failing with checksum errors on write > - > > Key: HADOOP-14906 > URL: https://issues.apache.org/jira/browse/HADOOP-14906 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 2.9.0, 3.1.0 > Environment: UK BT ADSL connection, 1.8.0_121-b13, azure storage > ireland >Reporter: Steve Loughran > > {{ITestAzureConcurrentOutOfBandIo}} is consistently raising an IOE with the > text "The MD5 value specified in the request did not match with the MD5 value > calculated by the server" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14870) backport HADOOP-14553 parallel tests to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180604#comment-16180604 ] Steve Loughran commented on HADOOP-14870: - (& given it's just HADOOP-14553 with some bits taken out, I can actually commit it without review. I'm giving people a chance to play with it first, though.) > backport HADOOP-14553 parallel tests to branch-2 > > > Key: HADOOP-14870 > URL: https://issues.apache.org/jira/browse/HADOOP-14870 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14870-branch-2-001.patch, > HADOOP-14870-branch-2-002.patch > > > Backport the HADOOP-14553 parallel test running from trunk to branch-2. > There's some complexity related to the FS Contract base test being JUnit4 in > branch-2, so it's not a simple cherrypick. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14870) backport HADOOP-14553 parallel tests to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180603#comment-16180603 ] Steve Loughran commented on HADOOP-14870: - Back on the normal network, HADOOP-14906 is gone and all tests are passing. I'm ready for others to play with this patch now. > backport HADOOP-14553 parallel tests to branch-2 > > > Key: HADOOP-14870 > URL: https://issues.apache.org/jira/browse/HADOOP-14870 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14870-branch-2-001.patch, > HADOOP-14870-branch-2-002.patch > > > Backport the HADOOP-14553 parallel test running from trunk to branch-2. > There's some complexity related to the FS Contract base test being JUnit4 in > branch-2, so it's not a simple cherrypick. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
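[Editorial note] For readers unfamiliar with the mechanism being backported here: HADOOP-14553 drives parallel test execution through the maven-surefire plugin, forking multiple JVMs and giving each fork an isolated working directory. The fragment below is only a hedged sketch of the general shape of such a profile; the exact profile name, properties, and directory layout in the actual patch may differ.

```xml
<!-- Sketch of a surefire parallel-test profile (illustrative, not the
     literal HADOOP-14553 configuration). -->
<profile>
  <id>parallel-tests</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <!-- run N independent JVM forks; N supplied on the command line -->
          <forkCount>${testsThreadCount}</forkCount>
          <reuseForks>false</reuseForks>
          <systemPropertyVariables>
            <!-- give each fork its own data directory so concurrent
                 tests do not clash on the local filesystem -->
            <test.build.data>${test.build.data}/${surefire.forkNumber}</test.build.data>
          </systemPropertyVariables>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```

Invocation would then look something like `mvn test -Pparallel-tests -DtestsThreadCount=8`.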
[jira] [Commented] (HADOOP-14906) ITestAzureConcurrentOutOfBandIo failing with checksum errors on write
[ https://issues.apache.org/jira/browse/HADOOP-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180584#comment-16180584 ] Steve Loughran commented on HADOOP-14906: - This happened in both parallel & serial test runs, so it wasn't the case that the problem was triggered by the parallel test runner of HADOOP-14553. > ITestAzureConcurrentOutOfBandIo failing with checksum errors on write > - > > Key: HADOOP-14906 > URL: https://issues.apache.org/jira/browse/HADOOP-14906 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 2.9.0, 3.1.0 > Environment: UK BT ADSL connection, 1.8.0_121-b13, azure storage > ireland >Reporter: Steve Loughran > > {{ITestAzureConcurrentOutOfBandIo}} is consistently raising an IOE with the > text "The MD5 value specified in the request did not match with the MD5 value > calculated by the server" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14906) ITestAzureConcurrentOutOfBandIo failing with checksum errors on write
[ https://issues.apache.org/jira/browse/HADOOP-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180577#comment-16180577 ] Steve Loughran commented on HADOOP-14906: - Doesn't occur at other locations. The one with the problem had: * BT ADSL * a BT wifi base station which never lets you change DNS servers The one without had: * BT Fibre-to-the-Cabinet * a DD-WRT base station bonded to Google DNS Same laptop. It's possible that these tests are failing because they are correctly detecting corruption of in-flight data: * I'd only expect that on HTTP connections, not HTTPS, * unless it was a (transient) problem at Azure storage and/or the laptop. One thing to consider here is what the retry policy is doing. There is retry logic in the upload routine, but did it work? How can we be confident of this? > ITestAzureConcurrentOutOfBandIo failing with checksum errors on write > - > > Key: HADOOP-14906 > URL: https://issues.apache.org/jira/browse/HADOOP-14906 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 2.9.0, 3.1.0 > Environment: UK BT ADSL connection, 1.8.0_121-b13, azure storage > ireland >Reporter: Steve Loughran > > {{ITestAzureConcurrentOutOfBandIo}} is consistently raising an IOE with the > text "The MD5 value specified in the request did not match with the MD5 value > calculated by the server" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
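[Editorial note] The MD5 mismatch this test reports is the standard Content-MD5 end-to-end check: the client sends a digest of the bytes it uploads, and the server recomputes the digest over the bytes it actually received; a mismatch means the payload changed in flight or one side computed over the wrong bytes. A minimal, hypothetical sketch of that check in plain JDK code (the names here are illustrative, not the Azure storage SDK's actual API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Hypothetical sketch of the Content-MD5 handshake: the client computes a
// base64-encoded MD5 digest of the payload before upload; the server
// recomputes it over the bytes received and rejects the request on mismatch.
public class Md5Check {
  static String contentMd5(byte[] payload) throws NoSuchAlgorithmException {
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    return Base64.getEncoder().encodeToString(md5.digest(payload));
  }

  public static void main(String[] args) throws Exception {
    byte[] block = "example block data".getBytes(StandardCharsets.UTF_8);
    String clientMd5 = contentMd5(block);          // sent with the request
    String serverMd5 = contentMd5(block);          // recomputed server-side
    // Same bytes on both sides => digests match; any in-flight corruption
    // or mismatched buffer would make this false and fail the request.
    System.out.println(clientMd5.equals(serverMd5));
  }
}
```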
[jira] [Commented] (HADOOP-14893) WritableRpcEngine should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HADOOP-14893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180421#comment-16180421 ] Hudson commented on HADOOP-14893: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12975 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12975/]) HADOOP-14893. WritableRpcEngine should use Time.monotonicNow. (aajisaka: rev d08b8c801a908b4242e7b21a54f3b1e4072f1eae) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java > WritableRpcEngine should use Time.monotonicNow > -- > > Key: HADOOP-14893 > URL: https://issues.apache.org/jira/browse/HADOOP-14893 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chetna Chaudhari >Assignee: Chetna Chaudhari >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 > > Attachments: HADOOP-14893-2.patch, HADOOP-14893.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14891) Guava 21.0+ libraries not compatible with user jobs
[ https://issues.apache.org/jira/browse/HADOOP-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180419#comment-16180419 ] Akira Ajisaka commented on HADOOP-14891: LGTM, +1 > Guava 21.0+ libraries not compatible with user jobs > --- > > Key: HADOOP-14891 > URL: https://issues.apache.org/jira/browse/HADOOP-14891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 2.8.1 >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Attachments: HADOOP-14891.001-branch-2.patch > > > User provided a guava 23.0 jar as part of the job submission. > {code} > 2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service > org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: > com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) > at > org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989) > at > org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936) > at > org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703) > at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508) > Caused by: java.lang.NoSuchMethodError: >
com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper; > at > org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at org.apache.hadoop.ipc.metrics.RpcMetrics.(RpcMetrics.java:74) > at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80) > at org.apache.hadoop.ipc.Server.(Server.java:2658) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134) > at > org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909) > at > org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930) > 2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to > do a clean initiateStop for Scheduler: [0:TezYarn] > {code} > Metrics2 has been relying on the deprecated toStringHelper for some time now, > which was finally removed in guava 21.0. Removing the dependency on this > method will free the user up to supply their own guava jar again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
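[Editorial note] The usual fix for this particular NoSuchMethodError is switching from the removed `Objects.toStringHelper` to `MoreObjects.toStringHelper`, which Guava introduced in 18.0 and which survives in 21.0+. A sketch of the migration (the class and fields below are illustrative, not the actual MetricsRegistry code):

```java
import com.google.common.base.MoreObjects;

// Guava 21.0 removed com.google.common.base.Objects.toStringHelper;
// MoreObjects.toStringHelper is the drop-in replacement, so code using it
// compiles and runs against both old (>= 18.0) and new Guava jars.
public class MetricsName {
  private final String name = "RpcMetrics";
  private final int port = 8020;

  @Override
  public String toString() {
    // Old, broken on Guava 21+:  Objects.toStringHelper(this)...
    return MoreObjects.toStringHelper(this)
        .add("name", name)
        .add("port", port)
        .toString();
  }

  public static void main(String[] args) {
    System.out.println(new MetricsName());
  }
}
```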
[jira] [Updated] (HADOOP-14893) WritableRpcEngine should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HADOOP-14893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14893: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 2.8.3 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) Committed this to trunk, branch-3.0, branch-2, and branch-2.8. Thanks [~chetna] for the contribution. > WritableRpcEngine should use Time.monotonicNow > -- > > Key: HADOOP-14893 > URL: https://issues.apache.org/jira/browse/HADOOP-14893 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chetna Chaudhari >Assignee: Chetna Chaudhari >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 > > Attachments: HADOOP-14893-2.patch, HADOOP-14893.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
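[Editorial note] The motivation for `Time.monotonicNow` over `System.currentTimeMillis` is that the wall clock can jump backwards (NTP corrections, manual clock changes), which can turn elapsed-time arithmetic negative, whereas a `System.nanoTime`-based clock only moves forward. A small sketch of the idea; the `monotonicNow` helper below mimics Hadoop's utility rather than quoting its actual implementation:

```java
// Why elapsed-time measurement should use a monotonic clock:
// currentTimeMillis() tracks the wall clock and can move backwards,
// while nanoTime() is monotonic within a JVM.
public class MonotonicElapsed {
  // Stand-in for org.apache.hadoop.util.Time.monotonicNow(): a
  // millisecond reading that never decreases.
  static long monotonicNow() {
    return System.nanoTime() / 1_000_000L;
  }

  public static void main(String[] args) throws InterruptedException {
    long start = monotonicNow();
    Thread.sleep(50);                  // simulated RPC processing time
    long elapsed = monotonicNow() - start;
    // With a monotonic clock this is always true, even if the wall
    // clock is reset mid-measurement.
    System.out.println(elapsed >= 0);
  }
}
```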
[jira] [Commented] (HADOOP-14893) WritableRpcEngine should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HADOOP-14893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180405#comment-16180405 ] Akira Ajisaka commented on HADOOP-14893: +1 > WritableRpcEngine should use Time.monotonicNow > -- > > Key: HADOOP-14893 > URL: https://issues.apache.org/jira/browse/HADOOP-14893 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chetna Chaudhari >Assignee: Chetna Chaudhari >Priority: Minor > Attachments: HADOOP-14893-2.patch, HADOOP-14893.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org