[jira] [Commented] (HADOOP-14661) S3A to support Requester Pays Buckets using
[ https://issues.apache.org/jira/browse/HADOOP-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159754#comment-16159754 ] Mandus Momberg commented on HADOOP-14661: - [~steve_l], I doubt I'll be able to hit the deadline for 3.0, my apologies. I have been extremely busy at work and have not had time to add the additional tests and clean up the code. > S3A to support Requester Pays Buckets using > --- > > Key: HADOOP-14661 > URL: https://issues.apache.org/jira/browse/HADOOP-14661 > Project: Hadoop Common > Issue Type: Sub-task > Components: common, util >Affects Versions: 3.0.0-alpha3 >Reporter: Mandus Momberg >Assignee: Mandus Momberg >Priority: Minor > Attachments: HADOOP-14661.patch > > Original Estimate: 2h > Remaining Estimate: 2h > > Amazon S3 has the ability to charge the requester for the cost of accessing > S3. This is called Requester Pays Buckets. > In order to access these buckets, each request needs to be signed with a > specific header. > http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
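The change requested above boils down to attaching the {{x-amz-request-payer: requester}} header (per the linked AWS documentation) to every signed S3 request. A minimal, self-contained Java sketch of that idea follows; the class, method, and flag names are illustrative assumptions, not code from the HADOOP-14661 patch, which would wire this through the AWS SDK request pipeline instead of a bare map.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch only: shows the header S3 expects for Requester Pays
 * buckets. Names here are hypothetical, not the S3A implementation.
 */
public class RequesterPaysSketch {

  /** Header S3 requires on requests to Requester Pays buckets. */
  public static final String REQUESTER_PAYS_HEADER = "x-amz-request-payer";

  /**
   * Return a copy of the request headers, adding the Requester Pays
   * header when the (hypothetical) config flag is enabled.
   */
  public static Map<String, String> withRequesterPays(
      Map<String, String> headers, boolean enabled) {
    Map<String, String> out = new HashMap<>(headers);
    if (enabled) {
      out.put(REQUESTER_PAYS_HEADER, "requester");
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> h = withRequesterPays(new HashMap<>(), true);
    System.out.println(h.get(REQUESTER_PAYS_HEADER)); // prints: requester
  }
}
```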
[jira] [Commented] (HADOOP-14089) Shaded Hadoop client runtime includes non-shaded classes
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159750#comment-16159750 ] Sean Busbey commented on HADOOP-14089: -- IT fails for me all the way back to the original application of HADOOP-11804. I think that makes it unrelated to this change, so I'll open a different jira for it. > Shaded Hadoop client runtime includes non-shaded classes > > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{<optional>true</optional>}} since they aren't needed > at runtime (this is what Guava does).
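The {{jar tf ... | grep}} check quoted above can be turned into an automated guard. Below is a hedged Java sketch of that idea; the class name and the single allowed {{org/apache/hadoop}} namespace are assumptions for illustration, not part of the HADOOP-14089 patch (a real check would also whitelist the relocated third-party prefix).

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/** Sketch: flag class files that escaped the shaded namespace. */
public class ShadingCheck {

  /**
   * Given a `jar tf` listing, return the .class entries that are not
   * under org/apache/hadoop/ (i.e. classes leaking unshaded).
   */
  public static List<String> leaked(List<String> entries) {
    return entries.stream()
        .filter(e -> e.endsWith(".class"))
        .filter(e -> !e.startsWith("org/apache/hadoop/"))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> listing = Arrays.asList(
        "org/apache/hadoop/fs/FileSystem.class",
        "okio/Okio.class",
        "META-INF/MANIFEST.MF");
    System.out.println(leaked(listing)); // prints [okio/Okio.class]
  }
}
```

In practice the listing would come from reading the runtime jar with java.util.jar.JarFile, and a non-empty result would fail the build.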
[jira] [Commented] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159708#comment-16159708 ] Hudson commented on HADOOP-14851: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12829 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12829/]) HADOOP-14851 LambdaTestUtils.eventually() doesn't spin on Assertion (fabbri: rev 180e814b081d3707c95641171d649b547db41a04) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/LambdaTestUtils.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestLambdaTestUtils.java > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL: https://issues.apache.org/jira/browse/HADOOP-14851 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14851-001.patch, HADOOP-14851-002.patch, > HADOOP-14851-003.patch > > > This is funny. The {{LambdaTestUtils.eventually()}} method, meant to spin > until a closure stops raising exceptions, doesn't catch {{Error}} and > subclasses, so doesn't fail on an {{Assert.assert()}} failure, which raises > an {{AssertionError}}. My bad :) > Example: > {code} > eventually(TIMEOUT, > () -> { > while (counter.incrementAndGet() < 5) { > assert false : "oops"; > } > }, > retryLogic); > {code} > Fix: catch Throwable, rethrow. Needs to add VirtualMachineError & subclasses > to the set of errors not to spin on (OOM, stack overflow, ...)
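The fix described at the end of the quoted report (catch {{Throwable}} so {{AssertionError}} is retried, but never spin on {{VirtualMachineError}} subclasses) can be sketched as follows. This is an illustrative reimplementation under assumed names, not the actual LambdaTestUtils code.

```java
/**
 * Minimal sketch of the HADOOP-14851 idea: retry a probe on any Throwable,
 * including AssertionError, but rethrow VirtualMachineError (OOM, stack
 * overflow) immediately instead of spinning on it.
 */
public class EventuallySketch {

  /** Like Callable, but allowed to throw anything, including Errors. */
  public interface Probe {
    void eval() throws Throwable;
  }

  public static void eventually(int attempts, long sleepMillis, Probe probe)
      throws Throwable {
    Throwable last = null;
    for (int i = 0; i < attempts; i++) {   // attempts must be >= 1
      try {
        probe.eval();
        return;                            // probe passed: done
      } catch (VirtualMachineError e) {
        throw e;                           // never retry OOM/stack overflow
      } catch (Throwable t) {
        last = t;                          // now includes AssertionError
        Thread.sleep(sleepMillis);
      }
    }
    throw last;                            // gave up: surface last failure
  }
}
```

The real utility uses a timeout and pluggable retry logic rather than a fixed attempt count; the structure of the catch clauses is the point here.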
[jira] [Comment Edited] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159703#comment-16159703 ] Aaron Fabbri edited comment on HADOOP-14851 at 9/9/17 3:29 AM: --- Committed to trunk after testing. Thanks for your contribution [~ste...@apache.org] (BTW, I wasn't sure if this should go on branch-2. What are our current guidelines for backporting s3a stuff to branch-2?) was (Author: fabbri): Committed to trunk after testing. Thanks for your contribution. (BTW, I wasn't sure if this should go on branch-2. What are our current guidelines for backporting s3a stuff to branch-2?)
[jira] [Updated] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14851: -- Resolution: Fixed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) Committed to trunk after testing. Thanks for your contribution. (BTW, I wasn't sure if this should go on branch-2. What are our current guidelines for backporting s3a stuff to branch-2?)
[jira] [Assigned] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Hongfei reassigned HADOOP-14804: - Assignee: Chen Hongfei > correct wrong parameters format order in core-default.xml > - > > Key: HADOOP-14804 > URL: https://issues.apache.org/jira/browse/HADOOP-14804 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha3 >Reporter: Chen Hongfei >Assignee: Chen Hongfei >Priority: Trivial > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch, > HADOOP-14804.003.patch > > > The descriptions of the "HTTP CORS" parameters appear before the names: > >Comma separated list of headers that are allowed for web > services needing cross-origin (CORS) support. > hadoop.http.cross-origin.allowed-headers > X-Requested-With,Content-Type,Accept,Origin > > .. > but they should follow the value, as the other properties do.
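For reference, the corrected layout the issue asks for places the description after the name and value inside each property element, matching the rest of core-default.xml (property name and value taken from the issue text above):

```xml
<property>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
  <description>Comma separated list of headers that are allowed for web
    services needing cross-origin (CORS) support.</description>
</property>
```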
[jira] [Commented] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159660#comment-16159660 ] Hadoop QA commented on HADOOP-14851: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
10m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 8s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14851 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886205/HADOOP-14851-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c4de8ff97931 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fddabc | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13218/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13218/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13218/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL: https://issues.apache.org/jira/browse/HADOOP-14851 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >
[jira] [Updated] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14851: -- Attachment: HADOOP-14851-003.patch v3 patch: Added a period at the end of the javadoc sentence to kill the remaining checkstyle issue.
[jira] [Commented] (HADOOP-14652) Update metrics-core version to 3.2.3
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159551#comment-16159551 ] Hadoop QA commented on HADOOP-14652: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 1s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 22s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}142m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.http.TestHttpServer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14652 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886169/HADOOP-14652.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux dd3f3e37389c 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8edc605 | | Default Java | 1.8.0_144 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13215/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13215/testReport/ | | modules | C: hadoop-project hadoop-common-project/hadoop-kms hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-tools/hadoop-sls . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13215/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Update metrics-core version to 3.2.3 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch,
[jira] [Commented] (HADOOP-14841) Let KMS Client retry 'No content to map' EOFExceptions
[ https://issues.apache.org/jira/browse/HADOOP-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159538#comment-16159538 ] Hadoop QA commented on HADOOP-14841: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 14m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 10s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14841 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886182/HADOOP-14841.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e20a19ad3fad 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fddabc | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13216/testReport/ | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13216/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Let KMS Client retry 'No content to map' EOFExceptions > -- > > Key: HADOOP-14841 > URL: https://issues.apache.org/jira/browse/HADOOP-14841 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14841.01.patch, HADOOP-14841.02.patch > > > We have seen quite some occurrences when the KMS server is stressed, some of > the requests would end up getting a 500 return code, with this in the server > log: > {noformat} > 2017-08-31
[jira] [Commented] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159519#comment-16159519 ] Hadoop QA commented on HADOOP-14856: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14856 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886194/HADOOP-14856.001.patch | | Optional Tests | asflicense | | uname | Linux ee13d49a1095 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fddabc | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13217/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt > - > > Key: HADOOP-14856 > URL: https://issues.apache.org/jira/browse/HADOOP-14856 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14856.001.patch > > > Some entries needed updating in NOTICE.txt. Found these while working on > HADOOP-14647.
[jira] [Updated] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14856: Attachment: HADOOP-14856.001.patch * AWS libraries updated to 1.11.134 * Jetty libraries updated to 9.3.19 * HBase libraries updated to 1.2.6 * Ehcache entry is added
[jira] [Updated] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14856: Status: Patch Available (was: Open)
[jira] [Created] (HADOOP-14856) Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt
Ray Chiang created HADOOP-14856: --- Summary: Fix AWS, Jetty, HBase, Ehcache entries for NOTICE.txt Key: HADOOP-14856 URL: https://issues.apache.org/jira/browse/HADOOP-14856 Project: Hadoop Common Issue Type: Bug Reporter: Ray Chiang Assignee: Ray Chiang Some entries needed updating in NOTICE.txt. Found these while working on HADOOP-14647.
[jira] [Updated] (HADOOP-14841) Let KMS Client retry 'No content to map' EOFExceptions
[ https://issues.apache.org/jira/browse/HADOOP-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14841: --- Attachment: HADOOP-14841.02.patch Uploading a patch to address Steve's comments. Though as HADOOP-14521 turns out, we may need to update that one first and may not need the RetriableException > Let KMS Client retry 'No content to map' EOFExceptions > -- > > Key: HADOOP-14841 > URL: https://issues.apache.org/jira/browse/HADOOP-14841 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14841.01.patch, HADOOP-14841.02.patch > > > We have seen quite some occurrences when the KMS server is stressed, some of > the requests would end up getting a 500 return code, with this in the server > log: > {noformat} > 2017-08-31 06:45:33,021 WARN org.apache.hadoop.crypto.key.kms.server.KMS: > User impala/HOSTNAME@REALM (auth:KERBEROS) request POST > https://HOSTNAME:16000/kms/v1/keyversion/MNHDKEdWtZWM4vPb0p2bw544vdSRB2gy7APAQURcZns/_eek?eek_op=decrypt > caused exception. 
> java.io.EOFException: No content to map to Object due to end of input > at > org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2444) > at > org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2396) > at > org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1648) > at > org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:54) > at > com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474) > at > com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123) > at > com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46) > at > com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153) > at > com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203) > at > com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) > at > 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) > at > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:723) > at > org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) > at > org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) > at > org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84) > at > org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) > at > org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301) > at >
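The retry behavior HADOOP-14841 asks for can be illustrated with a minimal sketch: treat the server-side "No content to map" EOFException as transient and re-attempt the call a bounded number of times. This is an illustration only — the class name, `MAX_RETRIES` constant, and retry count are assumptions for this sketch, not Hadoop's actual KMS client or RetryPolicy API:

```java
import java.io.EOFException;
import java.io.IOException;
import java.util.concurrent.Callable;

public class EofRetrySketch {
    static final int MAX_RETRIES = 3; // illustrative bound, not a Hadoop default

    // Re-attempt the call when it fails with the transient EOFException;
    // rethrow after the retry budget is exhausted.
    static <T> T callWithRetry(Callable<T> call) throws Exception {
        IOException last = null;
        for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
            try {
                return call.call();
            } catch (EOFException e) {
                last = e; // transient server-side parse failure; try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a call that fails twice with the KMS error, then succeeds.
        final int[] calls = {0};
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new EOFException("No content to map to Object");
            }
            return "decrypted";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: decrypted after 3 attempts
    }
}
```

A real fix would instead plug into the KMS client's retry policy so only this specific exception is retried; the sketch just shows the bounded-retry shape.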
[jira] [Commented] (HADOOP-14847) Remove Guava Supplier and change to java Supplier in AMRMClient and AMRMClientAysnc
[ https://issues.apache.org/jira/browse/HADOOP-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159454#comment-16159454 ] Haibo Chen commented on HADOOP-14847: - Thanks for the reminder, [~vinodkv]. Will do now. > Remove Guava Supplier and change to java Supplier in AMRMClient and > AMRMClientAysnc > --- > > Key: HADOOP-14847 > URL: https://issues.apache.org/jira/browse/HADOOP-14847 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Blocker > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14847.patch > > > Remove the Guava library Supplier usage in user facing API's in > AMRMClient.java and AMRMClientAsync.java -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14847) Remove Guava Supplier and change to java Supplier in AMRMClient and AMRMClientAysnc
[ https://issues.apache.org/jira/browse/HADOOP-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-14847: Fix Version/s: 3.0.0-beta1 > Remove Guava Supplier and change to java Supplier in AMRMClient and > AMRMClientAysnc > --- > > Key: HADOOP-14847 > URL: https://issues.apache.org/jira/browse/HADOOP-14847 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Blocker > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14847.patch > > > Remove the Guava library Supplier usage in user facing API's in > AMRMClient.java and AMRMClientAsync.java -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting
[ https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159435#comment-16159435 ] Allen Wittenauer commented on HADOOP-14855: --- (I'm having a total deja vu moment right now. I wish I could remember who else I discussed this issue with a few years ago. haha.) It reduces the size of the edge case from 0.5% to 0.1% (or whatever). It'll still match things like 'cat datanode.txt'. Execution-speed-wise, though, it's pretty expensive when one considers that we've doubled the # of forks for every start/status/stop request. That'll have an impact especially in places like QA. But giving some further thought to it... I think you're on to something that might work pretty well... hmm... off the top:
{code}
pspid=$(ps -fp "${pid}" 2>/dev/null)
if [[ $? -eq 0 ]]; then
  if [[ ${pspid} =~ Dproc_${daemonname} ]]; then
{code}
or whatever. I think that'd be nearly the same cost as we have now and doesn't make the edge-case situation more expensive. It also avoids the tempting-but-costly I/O of writing the ps output to a temp file. The 'grep' is replaced by an internal regex check, and since 3.x consistently defines proc_ for jps usage we can bounce off of that to reduce the search space even more. It's still not foolproof, but it does cut down the chances of false positives. It's just a matter of whether it's worth it or not. BTW, there are some other patches out there regarding this code but I haven't had a chance to really play with the edge cases. (and there are a lot.) > Hadoop scripts may errantly believe a daemon is still running, preventing it > from starting > -- > > Key: HADOOP-14855 > URL: https://issues.apache.org/jira/browse/HADOOP-14855 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha4 >Reporter: Aaron T. 
Myers > > I encountered a case recently where the NN wouldn't start, with the error > message "namenode is running as process 16769. Stop it first." In fact the > NN was not running at all, but rather another long-running process was > running with this pid. > It looks to me like our scripts just check to see if _any_ process is running > with the pid that the NN (or any Hadoop daemon) most recently ran with. This > is clearly not a fool-proof way of checking to see if a particular type of > daemon is now running, as some other process could start running with the > same pid since the daemon in question was previously shut down. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
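The marker-matching idea discussed above can be sketched outside the shell scripts as well: checking only that *some* process owns the recorded pid false-positives when the pid is reused, so the check should also match the command line against a daemon marker such as the `-Dproc_<name>` flag the 3.x scripts set for jps. A minimal illustration, assuming the command line came from `ps -fp "${pid}"` or `/proc/<pid>/cmdline`; the class and method names are hypothetical:

```java
import java.util.Optional;

public class PidCheckSketch {
    // True only when the live process's command line carries the daemon
    // marker, not merely when any process exists with the pid.
    static boolean looksLikeDaemon(Optional<String> commandLine, String daemon) {
        return commandLine
            .map(c -> c.contains("-Dproc_" + daemon))
            .orElse(false); // pid not alive at all
    }

    public static void main(String[] args) {
        // pid reused by an unrelated process: a naive "pid exists" check
        // passes, but the marker check correctly fails
        Optional<String> other = Optional.of("cat datanode.txt");
        Optional<String> real = Optional.of("java -Dproc_datanode ... DataNode");
        System.out.println(looksLikeDaemon(other, "datanode")); // false
        System.out.println(looksLikeDaemon(real, "datanode"));  // true
    }
}
```

Note the `cat datanode.txt` case: a plain substring match on the daemon name would still hit it, which is why anchoring on the `Dproc_` prefix shrinks the false-positive window.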
[jira] [Commented] (HADOOP-14847) Remove Guava Supplier and change to java Supplier in AMRMClient and AMRMClientAysnc
[ https://issues.apache.org/jira/browse/HADOOP-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159433#comment-16159433 ] Hudson commented on HADOOP-14847: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12827 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12827/]) HADOOP-14847. Remove Guava Supplier and change to java Supplier in (haibochen: rev 8edc60531fec4f4070955b3e82a78ba70ba40ec0) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java > Remove Guava Supplier and change to java Supplier in AMRMClient and > AMRMClientAysnc > --- > > Key: HADOOP-14847 > URL: https://issues.apache.org/jira/browse/HADOOP-14847 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14847.patch > > > Remove the Guava library Supplier usage in user facing API's in > AMRMClient.java and AMRMClientAsync.java -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14847) Remove Guava Supplier and change to java Supplier in AMRMClient and AMRMClientAysnc
[ https://issues.apache.org/jira/browse/HADOOP-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159416#comment-16159416 ] Vinod Kumar Vavilapalli commented on HADOOP-14847: -- [~haibochen], you forgot to set the fix-version. > Remove Guava Supplier and change to java Supplier in AMRMClient and > AMRMClientAysnc > --- > > Key: HADOOP-14847 > URL: https://issues.apache.org/jira/browse/HADOOP-14847 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14847.patch > > > Remove the Guava library Supplier usage in user facing API's in > AMRMClient.java and AMRMClientAsync.java -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14798) Update sshd-core and related mina-core library versions
[ https://issues.apache.org/jira/browse/HADOOP-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159409#comment-16159409 ] Ray Chiang commented on HADOOP-14798: - sshd-core and mina-core are not bundled with Hadoop, so no L check is needed. > Update sshd-core and related mina-core library versions > --- > > Key: HADOOP-14798 > URL: https://issues.apache.org/jira/browse/HADOOP-14798 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14798.001.patch > > > Update the dependencies > org.apache.mina:mina-core:2.0.0-M5 > org.apache.sshd:sshd-core:0.14.0 > mina-core can be updated to 2.0.16 and sshd-core to 1.6.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159410#comment-16159410 ] Ray Chiang commented on HADOOP-14799: - nimbus-jose-jwt is still APLv2 and does not have a NOTICE file. > Update nimbus-jose-jwt to 4.41.1 > > > Key: HADOOP-14799 > URL: https://issues.apache.org/jira/browse/HADOOP-14799 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch > > > Update the dependency > com.nimbusds:nimbus-jose-jwt:3.9 > to the latest (4.41.1) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14648) Bump commons-configuration2 to 2.1.1
[ https://issues.apache.org/jira/browse/HADOOP-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159405#comment-16159405 ] Ray Chiang commented on HADOOP-14648: - commons-configuration2 is still APLv2 and only has the default NOTICE file. > Bump commons-configuration2 to 2.1.1 > > > Key: HADOOP-14648 > URL: https://issues.apache.org/jira/browse/HADOOP-14648 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14648.001.patch > > > Update the dependency > org.apache.commons: commons-configuration2: 2.1 > to the latest (2.1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14796) Update json-simple version to 1.1.1
[ https://issues.apache.org/jira/browse/HADOOP-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159402#comment-16159402 ] Ray Chiang commented on HADOOP-14796: - json-simple is still APLv2 and has no NOTICE file. > Update json-simple version to 1.1.1 > --- > > Key: HADOOP-14796 > URL: https://issues.apache.org/jira/browse/HADOOP-14796 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14796.001.patch > > > Update the dependency > com.googlecode.json-simple:json-simple:1.1 > to the latest (1.1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6
[ https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159401#comment-16159401 ] Ray Chiang commented on HADOOP-14655: - httpcore is still APLv2 and has no NOTICE file. > Update httpcore version to 4.4.6 > > > Key: HADOOP-14655 > URL: https://issues.apache.org/jira/browse/HADOOP-14655 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14655.001.patch > > > Update the dependency > org.apache.httpcomponents:httpcore:4.4.4 > to the latest (4.4.6). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14654) Update httpclient version to 4.5.3
[ https://issues.apache.org/jira/browse/HADOOP-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159399#comment-16159399 ] Ray Chiang commented on HADOOP-14654: - httpclient is still APLv2 and has no NOTICE file. > Update httpclient version to 4.5.3 > -- > > Key: HADOOP-14654 > URL: https://issues.apache.org/jira/browse/HADOOP-14654 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14654.001.patch > > > Update the dependency > org.apache.httpcomponents:httpclient:4.5.2 > to the latest (4.5.3). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14847) Remove Guava Supplier and change to java Supplier in AMRMClient and AMRMClientAysnc
[ https://issues.apache.org/jira/browse/HADOOP-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-14847: Resolution: Fixed Hadoop Flags: Incompatible change, Reviewed (was: Incompatible change) Status: Resolved (was: Patch Available) Thanks [~bharatviswa] for your contribution. I have committed it to trunk and branch-3.0. > Remove Guava Supplier and change to java Supplier in AMRMClient and > AMRMClientAysnc > --- > > Key: HADOOP-14847 > URL: https://issues.apache.org/jira/browse/HADOOP-14847 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Blocker > Attachments: HADOOP-14847.patch > > > Remove the Guava library Supplier usage in user facing API's in > AMRMClient.java and AMRMClientAsync.java -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs
[ https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159400#comment-16159400 ] Hadoop QA commented on HADOOP-14520: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} 
| {color:red} 0m 19s{color} | {color:red} hadoop-tools_hadoop-azure generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:eaf5c66 | | JIRA Issue | HADOOP-14520 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886163/hadoop-14520-branch-2-010.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 302bf5eab3e0 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 1421196 | | Default Java | 1.7.0_131 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/13214/artifact/patchprocess/diff-compile-javac-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13214/testReport/ | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13214/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > WASB: Block compaction for Azure Block Blobs > > > Key: HADOOP-14520 > URL: https://issues.apache.org/jira/browse/HADOOP-14520 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.7.4 >Reporter: Georgi Chalakov >Assignee: Georgi Chalakov > Attachments: HADOOP-14520-006.patch,
[jira] [Commented] (HADOOP-14653) Update joda-time version to 2.9.9
[ https://issues.apache.org/jira/browse/HADOOP-14653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159393#comment-16159393 ] Ray Chiang commented on HADOOP-14653: - joda-time is not bundled with Hadoop. L check shows nothing. > Update joda-time version to 2.9.9 > - > > Key: HADOOP-14653 > URL: https://issues.apache.org/jira/browse/HADOOP-14653 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14653.001.patch > > > Update the dependency > joda-time:joda-time:2.9.4 > to the latest (2.9.9). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.3
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14652: Attachment: HADOOP-14652.003.patch * Add information to NOTICE.txt. > Update metrics-core version to 3.2.3 > > > Key: HADOOP-14652 > URL: https://issues.apache.org/jira/browse/HADOOP-14652 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch, > HADOOP-14652.003.patch > > > The current artifact is: > com.codahale.metrics:metrics-core:3.0.1 > That version could either be bumped to 3.0.2 (the latest of that line), or > updated to the latest artifact: > io.dropwizard.metrics:metrics-core:3.2.3 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14844) Remove requirement to specify TenantGuid for MSI Token Provider
[ https://issues.apache.org/jira/browse/HADOOP-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14844: Resolution: Fixed Fix Version/s: 3.1.0 2.8.3 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-3.0, branch-2, and branch-2.8. Thanks [~ASikaria] for the contribution! > Remove requirement to specify TenantGuid for MSI Token Provider > --- > > Key: HADOOP-14844 > URL: https://issues.apache.org/jira/browse/HADOOP-14844 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/adl >Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 >Reporter: Atul Sikaria >Assignee: Atul Sikaria > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 > > Attachments: HADOOP-14844.001.patch, HADOOP-14844.002.patch, > HADOOP-14844.003.patch, HADOOP-14844.004.patch > > > The MSI identity extension on Azure VMs has removed the need to specify the > tenant guid as part of the request to retrieve token from MSI service on the > local VM. This means the tenant guid configuration parameter is not needed > anymore. This change removes the redundant configuration parameter. > It also makes the port number optional - if not specified, then the default > port is used by the ADLS SDK (happens to be 50342, but that is transparent to > Hadoop code). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159377#comment-16159377 ] Sahil Takiar commented on HADOOP-13600: --- Rebased this patch and made some bug fixes: * Details of how the code works are in the PR: https://github.com/apache/hadoop/pull/157 * Ran the existing tests (except for the S3Guard ITests because I need to get DynamoDb access) ** Ran the unit tests, itests, and scale tests and they all pass ** There already seems to be an existing scale test that stresses this part of the code: {{ITestS3ADeleteManyFiles#testBulkRenameAndDelete}} creates a bunch of files and then renames them * I haven't written other ITests because I wanted to get some input on whether additional ones are necessary * Planning to add some unit tests (using lots of mocks) once I get a thumbs up on the overall approach [~fabbri], [~mackrorysd] could you take a look? > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducible to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
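The parallel-copy idea in HADOOP-13600 can be sketched with a plain executor: submit one copy task per file and then wait on all the futures, so the directory rename's duration approaches that of the longest single copy instead of the sum. `copyFile` below is a stand-in for the real S3 COPY request, and the pool size is arbitrary — this is an illustration of the concurrency shape, not the S3AFileSystem internals:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class ParallelRenameSketch {
    // Stand-in for an S3 server-side COPY; returns the destination key.
    static String copyFile(String src, String dstDir) {
        return dstDir + "/" + src.substring(src.lastIndexOf('/') + 1);
    }

    public static void main(String[] args) throws Exception {
        List<String> files = List.of("dir/a", "dir/b", "dir/c");
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Launch all per-file copies concurrently...
        List<Future<String>> futures = files.stream()
            .map(f -> pool.submit(() -> copyFile(f, "newdir")))
            .collect(Collectors.toList());

        // ...then block until every copy has finished before completing
        // the rename (a failure here would surface via get()).
        for (Future<String> f : futures) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

The important property is the barrier at the end: the rename must not report success (or start deleting sources) until every copy future has completed.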
[jira] [Commented] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159355#comment-16159355 ] Hadoop QA commented on HADOOP-14851: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
10m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 15s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 59m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14851 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886153/HADOOP-14851-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 689a84f6343e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a323f73 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13213/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13213/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13213/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13213/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL:
[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs
[ https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Georgi Chalakov updated HADOOP-14520: - Affects Version/s: (was: 3.0.0-alpha3) 2.7.4 Status: Patch Available (was: Open) > WASB: Block compaction for Azure Block Blobs > > > Key: HADOOP-14520 > URL: https://issues.apache.org/jira/browse/HADOOP-14520 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.7.4 >Reporter: Georgi Chalakov >Assignee: Georgi Chalakov > Attachments: HADOOP-14520-006.patch, HADOOP-14520-008.patch, > HADOOP-14520-009.patch, HADOOP-14520-05.patch, HADOOP_14520_07.patch, > HADOOP_14520_08.patch, HADOOP_14520_09.patch, HADOOP_14520_10.patch, > hadoop-14520-branch-2-010.patch, HADOOP-14520-patch-07-08.diff, > HADOOP-14520-patch-07-09.diff > > > Block Compaction for WASB allows uploading new blocks for every hflush/hsync > call. When the number of blocks is above 32000, the next hflush/hsync triggers > the block compaction process. Block compaction replaces a sequence of blocks > with one block. From all the sequences with total length less than 4M, > compaction chooses the longest one. It is a greedy algorithm that preserves > all potential candidates for the next round. Block Compaction for WASB > increases data durability and allows using block blobs instead of page blobs. > By default, block compaction is disabled. Similar to the configuration for > page blobs, the client needs to specify HDFS folders where block compaction > over block blobs is enabled. > Results for HADOOP_14520_07.patch > tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net > Tests run: 777, Failures: 0, Errors: 0, Skipped: 155 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
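The greedy selection rule described in the issue (among contiguous runs of blocks whose total length stays under 4 MB, choose the longest run to replace with one block) can be sketched with a sliding window over the block sizes. This illustrates the selection rule only, under the stated 4 MB assumption; it is not the WASB implementation, and the class and method names are hypothetical:

```java
import java.util.List;

public class CompactionPickSketch {
    static final long LIMIT = 4L * 1024 * 1024; // 4 MB compaction budget

    // Returns {startIndex, blockCount} of the longest contiguous run of
    // blocks whose summed size stays strictly under LIMIT.
    static int[] longestRunUnderLimit(List<Long> blockSizes) {
        int bestStart = 0, bestLen = 0;
        long sum = 0;
        int start = 0;
        for (int end = 0; end < blockSizes.size(); end++) {
            sum += blockSizes.get(end);
            // Shrink the window from the left until we are back under budget.
            while (sum >= LIMIT) {
                sum -= blockSizes.get(start++);
            }
            if (end - start + 1 > bestLen) {
                bestLen = end - start + 1;
                bestStart = start;
            }
        }
        return new int[] {bestStart, bestLen};
    }

    public static void main(String[] args) {
        // Sizes in bytes: the run of small hflush-sized blocks starting at
        // index 0 is the best compaction candidate.
        List<Long> sizes =
            List.of(3_000_000L, 100_000L, 200_000L, 150_000L, 3_500_000L);
        int[] pick = longestRunUnderLimit(sizes);
        System.out.println(pick[0] + "," + pick[1]); // prints 0,4
    }
}
```

Because the window only ever shrinks from the left, candidates that remain under the budget are preserved for later rounds, matching the "preserves all potential candidates" property the description claims.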
[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs
[ https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Georgi Chalakov updated HADOOP-14520: - Status: Open (was: Patch Available) > WASB: Block compaction for Azure Block Blobs > > > Key: HADOOP-14520 > URL: https://issues.apache.org/jira/browse/HADOOP-14520 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.0.0-alpha3 >Reporter: Georgi Chalakov >Assignee: Georgi Chalakov > Attachments: HADOOP-14520-006.patch, HADOOP-14520-008.patch, > HADOOP-14520-009.patch, HADOOP-14520-05.patch, HADOOP_14520_07.patch, > HADOOP_14520_08.patch, HADOOP_14520_09.patch, HADOOP_14520_10.patch, > hadoop-14520-branch-2-010.patch, HADOOP-14520-patch-07-08.diff, > HADOOP-14520-patch-07-09.diff > > > Block Compaction for WASB allows uploading new blocks for every hflush/hsync > call. When the number of blocks is above 32000, the next hflush/hsync triggers > the block compaction process. Block compaction replaces a sequence of blocks > with one block. From all the sequences with total length less than 4M, > compaction chooses the longest one. It is a greedy algorithm that preserves > all potential candidates for the next round. Block Compaction for WASB > increases data durability and allows using block blobs instead of page blobs. > By default, block compaction is disabled. Similar to the configuration for > page blobs, the client needs to specify HDFS folders where block compaction > over block blobs is enabled. > Results for HADOOP_14520_07.patch > tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net > Tests run: 777, Failures: 0, Errors: 0, Skipped: 155 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs
[ https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159350#comment-16159350 ] Georgi Chalakov commented on HADOOP-14520: -- Thanks for the review Steve! I have attached the patch for branch-2: hadoop-14520-branch-2-010.patch Results from endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net Tests run: 774, Failures: 0, Errors: 0, Skipped: 131 > WASB: Block compaction for Azure Block Blobs > > > Key: HADOOP-14520 > URL: https://issues.apache.org/jira/browse/HADOOP-14520 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.0.0-alpha3 >Reporter: Georgi Chalakov >Assignee: Georgi Chalakov > Attachments: HADOOP-14520-006.patch, HADOOP-14520-008.patch, > HADOOP-14520-009.patch, HADOOP-14520-05.patch, HADOOP_14520_07.patch, > HADOOP_14520_08.patch, HADOOP_14520_09.patch, HADOOP_14520_10.patch, > hadoop-14520-branch-2-010.patch, HADOOP-14520-patch-07-08.diff, > HADOOP-14520-patch-07-09.diff > > > Block Compaction for WASB allows uploading new blocks for every hflush/hsync > call. When the number of blocks is above 32000, the next hflush/hsync triggers > the block compaction process. Block compaction replaces a sequence of blocks > with one block. From all the sequences with total length less than 4M, > compaction chooses the longest one. It is a greedy algorithm that preserves > all potential candidates for the next round. Block Compaction for WASB > increases data durability and allows using block blobs instead of page blobs. > By default, block compaction is disabled. Similar to the configuration for > page blobs, the client needs to specify HDFS folders where block compaction > over block blobs is enabled. 
> Results for HADOOP_14520_07.patch > tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net > Tests run: 777, Failures: 0, Errors: 0, Skipped: 155 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs
[ https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Georgi Chalakov updated HADOOP-14520: - Attachment: hadoop-14520-branch-2-010.patch > WASB: Block compaction for Azure Block Blobs > > > Key: HADOOP-14520 > URL: https://issues.apache.org/jira/browse/HADOOP-14520 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.0.0-alpha3 >Reporter: Georgi Chalakov >Assignee: Georgi Chalakov > Attachments: HADOOP-14520-006.patch, HADOOP-14520-008.patch, > HADOOP-14520-009.patch, HADOOP-14520-05.patch, HADOOP_14520_07.patch, > HADOOP_14520_08.patch, HADOOP_14520_09.patch, HADOOP_14520_10.patch, > hadoop-14520-branch-2-010.patch, HADOOP-14520-patch-07-08.diff, > HADOOP-14520-patch-07-09.diff > > > Block Compaction for WASB allows uploading new blocks for every hflush/hsync > call. When the number of blocks is above 32000, the next hflush/hsync triggers > the block compaction process. Block compaction replaces a sequence of blocks > with one block. From all the sequences with total length less than 4M, > compaction chooses the longest one. It is a greedy algorithm that preserves > all potential candidates for the next round. Block Compaction for WASB > increases data durability and allows using block blobs instead of page blobs. > By default, block compaction is disabled. Similar to the configuration for > page blobs, the client needs to specify HDFS folders where block compaction > over block blobs is enabled. > Results for HADOOP_14520_07.patch > tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net > Tests run: 777, Failures: 0, Errors: 0, Skipped: 155 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting
[ https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159295#comment-16159295 ] Aaron T. Myers commented on HADOOP-14855: - Actually, [~aw], here's a lightweight suggestion to make this check at least much more robust, if not quite foolproof. The current code that does this just checks to see if a process is running with the pid in question. But we also know the name of the daemon we're checking on, so couldn't we pretty easily make this check more robust by also grepping for the name of the daemon in the {{`ps'}} output for the pid in question? That would take an already rare issue and make it _exceptionally_ unlikely to result in a false positive, and without adding any additional dependencies beyond grep. Specifically, I'm thinking replace this line: {code} if ps -p "${pid}" > /dev/null 2>&1; then {code} With something like this: {code} if ps -fp "${pid}" | grep "${daemonname}" > /dev/null 2>&1; then {code} Total shell scripting newbie here, so please feel free to tell me that this is way off base. > Hadoop scripts may errantly believe a daemon is still running, preventing it > from starting > -- > > Key: HADOOP-14855 > URL: https://issues.apache.org/jira/browse/HADOOP-14855 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha4 >Reporter: Aaron T. Myers > > I encountered a case recently where the NN wouldn't start, with the error > message "namenode is running as process 16769. Stop it first." In fact the > NN was not running at all, but rather another long-running process was > running with this pid. > It looks to me like our scripts just check to see if _any_ process is running > with the pid that the NN (or any Hadoop daemon) most recently ran with. 
This > is clearly not a fool-proof way of checking to see if a particular type of > daemon is now running, as some other process could start running with the > same pid since the daemon in question was previously shut down. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
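The pid-plus-name check suggested in the comment above can also be expressed in Java with the standard Java 9+ {{ProcessHandle}} API instead of {{ps}} and {{grep}}. This is only an illustrative sketch, not part of Hadoop's scripts; {{pidMatchesDaemon}} is a hypothetical helper name.

```java
public class PidCheckSketch {
    // Returns true only when a live process with this pid exists AND its
    // command line mentions the expected daemon name; a recycled pid that
    // now belongs to an unrelated process yields false.
    static boolean pidMatchesDaemon(long pid, String daemonName) {
        return ProcessHandle.of(pid)
                .filter(ProcessHandle::isAlive)
                .flatMap(h -> h.info().commandLine())
                .map(cmd -> cmd.contains(daemonName))
                .orElse(false);
    }

    public static void main(String[] args) {
        long self = ProcessHandle.current().pid();
        // The current JVM's command line will not contain a made-up daemon name.
        System.out.println(pidMatchesDaemon(self, "no-such-daemon-xyz"));  // prints false
    }
}
```

Note that {{ProcessHandle.info().commandLine()}} may return an empty {{Optional}} on some platforms, in which case this sketch conservatively reports no match, the same false-negative behaviour the grep approach would have if {{ps}} output were unavailable.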
[jira] [Commented] (HADOOP-14844) Remove requirement to specify TenantGuid for MSI Token Provider
[ https://issues.apache.org/jira/browse/HADOOP-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159258#comment-16159258 ] Hudson commented on HADOOP-14844: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12826 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12826/]) HADOOP-14844. Remove requirement to specify TenantGuid for MSI Token (jzhuge: rev a4661850c1e0794baf493a468191e12681d68ab4) * (edit) hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java * (edit) hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md * (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestAzureADTokenProvider.java * (edit) hadoop-tools/hadoop-azure-datalake/pom.xml * (edit) hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlConfKeys.java > Remove requirement to specify TenantGuid for MSI Token Provider > --- > > Key: HADOOP-14844 > URL: https://issues.apache.org/jira/browse/HADOOP-14844 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/adl >Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 >Reporter: Atul Sikaria >Assignee: Atul Sikaria > Attachments: HADOOP-14844.001.patch, HADOOP-14844.002.patch, > HADOOP-14844.003.patch, HADOOP-14844.004.patch > > > The MSI identity extension on Azure VMs has removed the need to specify the > tenant guid as part of the request to retrieve token from MSI service on the > local VM. This means the tenant guid configuration parameter is not needed > anymore. This change removes the redundant configuration parameter. > It also makes the port number optional - if not specified, then the default > port is used by the ADLS SDK (happens to be 50342, but that is transparent to > Hadoop code). 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13948) Create automated scripts to update LICENSE/NOTICE
[ https://issues.apache.org/jira/browse/HADOOP-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159253#comment-16159253 ] Xiao Chen commented on HADOOP-13948: Hi [~andrew.wang], Sorry to inform you, but I think this won't be done for beta 1. Currently the manual steps are: # download each tab from the spreadsheet, and put it into the scripts dir # run the scripts, examine outputs from step3.sh # for any warnings, add the dependency to the spreadsheet, with the googled-out LICENSE for that dependency, then redo this process # with no warnings, run step4.sh. Look at the generated LICENSE, and copy it into the existing one. My initial intention was to switch to a more friendly approach, so that the manual steps of #1 and #4 are taken out. I have been occupied with other things, so this didn't happen. I think we can: - push this to beta2, and I will try to squeeze time for it - commit this as an experimental tool, and improve it later. > Create automated scripts to update LICENSE/NOTICE > - > > Key: HADOOP-13948 > URL: https://issues.apache.org/jira/browse/HADOOP-13948 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HADOOP-13948.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159248#comment-16159248 ] Xiao Chen commented on HADOOP-14521: Thanks a lot [~djp] for the quick actions! > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3 > > Attachments: HADOOP-14521.09.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
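Retry logic of the kind this issue asks for is typically a wrapper with bounded attempts and exponential backoff around each KMS call. The following is a generic illustrative sketch, not the actual Hadoop KMS client code; {{withRetries}} and its parameters are assumptions for illustration.

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Runs op, retrying up to maxAttempts times with doubling backoff.
    // The last failure is rethrown once attempts are exhausted.
    static <T> T withRetries(Callable<T> op, int maxAttempts, long initialSleepMs)
            throws Exception {
        Exception last = null;
        long sleep = initialSleepMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(sleep);  // back off before the next attempt
                    sleep *= 2;
                }
            }
        }
        throw last;  // non-null whenever maxAttempts >= 1
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice with a transient error, then succeeds on the third call.
        String v = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5, 1L);
        System.out.println(v + " after " + calls[0] + " attempts");  // prints ok after 3 attempts
    }
}
```

A production client would additionally distinguish retriable failures (connection refused, timeouts) from non-retriable ones (authorization errors), which a blanket {{catch (Exception)}} like this sketch does not do.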
[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting
[ https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159246#comment-16159246 ] Aaron T. Myers commented on HADOOP-14855: - Gotcha, all makes sense. I figured I wasn't the first person to encounter this, but couldn't find the JIRA. I'll go ahead and close this one as a dupe of HADOOP-9085. > Hadoop scripts may errantly believe a daemon is still running, preventing it > from starting > -- > > Key: HADOOP-14855 > URL: https://issues.apache.org/jira/browse/HADOOP-14855 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha4 >Reporter: Aaron T. Myers > > I encountered a case recently where the NN wouldn't start, with the error > message "namenode is running as process 16769. Stop it first." In fact the > NN was not running at all, but rather another long-running process was > running with this pid. > It looks to me like our scripts just check to see if _any_ process is running > with the pid that the NN (or any Hadoop daemon) most recently ran with. This > is clearly not a fool-proof way of checking to see if a particular type of > daemon is now running, as some other process could start running with the > same pid since the daemon in question was previously shut down. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting
[ https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159228#comment-16159228 ] Allen Wittenauer commented on HADOOP-14855: --- This is a dupe of HADOOP-9085 (and its buddy HADOOP-9086). [~ste...@apache.org]'s comments are spot on, with this being the key one: bq. What we need to do is move away from pid-file-liveness tests altogether. Unfortunately, we're using Java. Doing liveness checks anywhere but in bash is either extremely expensive due to the massive classpath or non-portable/introduces more environmental dependencies. Other thoughts: 1) These types of pid clashes are more on the edge case/rare side. They just generally aren't worth spending the effort on. 2) Given user-functions and shell profiles, it's possible for end users (or vendors) to replace the pid checking/handling on their own. I'm expecting experienced admins to replace it with daemontools and the like. > Hadoop scripts may errantly believe a daemon is still running, preventing it > from starting > -- > > Key: HADOOP-14855 > URL: https://issues.apache.org/jira/browse/HADOOP-14855 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha4 >Reporter: Aaron T. Myers > > I encountered a case recently where the NN wouldn't start, with the error > message "namenode is running as process 16769. Stop it first." In fact the > NN was not running at all, but rather another long-running process was > running with this pid. > It looks to me like our scripts just check to see if _any_ process is running > with the pid that the NN (or any Hadoop daemon) most recently ran with. This > is clearly not a fool-proof way of checking to see if a particular type of > daemon is now running, as some other process could start running with the > same pid since the daemon in question was previously shut down. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159221#comment-16159221 ] Hadoop QA commented on HADOOP-14797: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 11s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 10m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 0s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.crypto.key.kms.server.TestKMS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14797 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886127/HADOOP-14797.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 190279b1e676 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c35510a | | Default Java | 1.8.0_144 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13212/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13212/testReport/ | | modules | C: hadoop-project . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13212/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14797.001.patch, HADOOP-14797.002.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HADOOP-14842) Hadoop 2.8.2 release build process get stuck due to java issue
[ https://issues.apache.org/jira/browse/HADOOP-14842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-14842: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.2 2.9.0 Status: Resolved (was: Patch Available) Thanks [~xgong] for the quick review. Committed to branch-2, branch-2.8 and branch-2.8.2. > Hadoop 2.8.2 release build process get stuck due to java issue > -- > > Key: HADOOP-14842 > URL: https://issues.apache.org/jira/browse/HADOOP-14842 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Junping Du >Assignee: Junping Du >Priority: Blocker > Fix For: 2.9.0, 2.8.2 > > Attachments: HADOOP-14842-branch-2.8-2.002.patch, > HADOOP-14842.branch-2.8-2.patch > > > My latest 2.8.2 release build (via docker) failed, with the following > errors: > > {noformat} > "/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean > install > Error: JAVA_HOME is not defined correctly. We cannot execute > /usr/lib/jvm/java-7-oracle/bin/java" > {noformat} > This looks like it is related to HADOOP-14474. However, reverting that patch > doesn't work here because the build fails earlier in java > download/installation - maybe, as mentioned in HADOOP-14474, some java 7 > download address got changed by Oracle. > Hard coding my local JAVA_HOME in create-release or the Dockerfile doesn't work > here although it shows the correct java home. My suspicion so far is that we still need > to download java 7 from somewhere to make the build continue in the docker > build process, but I haven't got a clue how to get through this. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159210#comment-16159210 ] Junping Du commented on HADOOP-14521: - I have reverted the patch from branch-2.8.2. Marked the fix version as 2.8.3 instead. > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3 > > Attachments: HADOOP-14521.09.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-14521: Fix Version/s: (was: 2.8.2) 2.8.3 > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3 > > Attachments: HADOOP-14521.09.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14389) Exception handling is incorrect in KerberosName.java
[ https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159208#comment-16159208 ] Andras Bokor commented on HADOOP-14389: --- Any other thoughts from somebody? > Exception handling is incorrect in KerberosName.java > > > Key: HADOOP-14389 > URL: https://issues.apache.org/jira/browse/HADOOP-14389 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor > Labels: supportability > Attachments: HADOOP-14389.01.patch, HADOOP-14389.02.patch > > > I found multiple inconsistencies: > Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}} > Principal: {{nn/host.dom...@realm.tld}} > Expected exception: {{BadStringFormat: ...3 is out of range...}} > Actual exception: {{ArrayIndexOutOfBoundsException: 3}} > > Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components) > Expected: {{IllegalArgumentException}} > Actual: {{java.lang.NumberFormatException: For input string: ""}} > > Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}} > Expected {{BadStringFormat: -1 is outside of valid range...}} > Actual: {{java.lang.NumberFormatException: For input string: ""}} > > Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}} > Expected {{java.lang.NumberFormatException: For input string: "one"}} > Actual {{java.lang.NumberFormatException: For input string: ""}} > > In addition: > {code}[^\\]]{code} > does not really make sense in {{ruleParser}}. Most probably it was needed > because we parse the whole rule string and remove the parsed rule from > the beginning of the string: {{KerberosName#parseRules}}. This made the regex > engine parse wrongly without it. > In addition: > In tests some corner cases are not covered. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14854) DistCp should not issue file status calls for files in the filter list
[ https://issues.apache.org/jira/browse/HADOOP-14854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14854: Affects Version/s: 2.8.1 Priority: Minor (was: Major) Component/s: tools/distcp Issue Type: Improvement (was: Bug) > DistCp should not issue file status calls for files in the filter list > -- > > Key: HADOOP-14854 > URL: https://issues.apache.org/jira/browse/HADOOP-14854 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.8.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Minor > > DistCp currently excludes the files in the filter list only when the files > are added to the copy list. > However distcp can be optimized by not issuing file status/get attr calls for > the files in the filter. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14851: Attachment: HADOOP-14851-002.patch Patch 002; addresses checkstyle issues. Tested the hadoop-aws module with this patch, as we've been using the operations there in the new tests. All seems well > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL: https://issues.apache.org/jira/browse/HADOOP-14851 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14851-001.patch, HADOOP-14851-002.patch > > > This is funny. The {{LambdaTestUtils.eventually()}} method, meant to spin > until a closure stops raising exceptions, doesn't catch {{Error}} and > subclasses, so doesn't fail on an {{Assert.assert()}} failure, which raises > an {{AssertionError}}. My bad :) > Example: > {code} > eventually(TIMEOUT, > () -> { > while (counter.incrementAndGet() < 5) { > assert false : "oops"; > } > }, > retryLogic); > {code} > Fix: catch Throwable, rethrow. Needs to add VirtualMachineError & subclasses > to the set of errors not to spin on (OOM, stack overflow, ...) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159174#comment-16159174 ] Steve Loughran commented on HADOOP-14851: - Test failure seems impossible to associate with this code, as the test which fails doesn't use the edited code. And it works locally. {code} java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.hadoop.metrics2.sink.TestFileSink.testFileSink(TestFileSink.java:135) {code} checkstyle complaints are about layout of lambda-expressions; wrapping the calls up to keep it lean and readable without checkstyle overreacting > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL: https://issues.apache.org/jira/browse/HADOOP-14851 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14851-001.patch > > > This is funny. The {{LambdaTestUtils.eventually()}} method, meant to spin > until a closure stops raising exceptions, doesn't catch {{Error}} and > subclasses, so doesn't fail on an {{Assert.assert()}} failure, which raises > an {{AssertionError}}. My bad :) > Example: > {code} > eventually(TIMEOUT, > () -> { > while (counter.incrementAndGet() < 5) { > assert false : "oops"; > } > }, > retryLogic); > {code} > Fix: catch Throwable, rethrow. Needs to add VirtualMachineError & subclasses > to the set of errors not to spin on (OOM, stack overflow, ...) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-13271) Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory
[ https://issues.apache.org/jira/browse/HADOOP-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-13271: --- Assignee: Steve Loughran > Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory > - > > Key: HADOOP-13271 > URL: https://issues.apache.org/jira/browse/HADOOP-13271 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > I'm seeing an intermittent failure of > {{TestS3AContractRootDir.testListEmptyRootDirectory}} > The sequence of: deleteFiles(listStatus(Path("/"))) is failing because the > file to delete is root ...yet the code is passing in the children of /, not / > itself. > hypothesis: when you call listStatus on an empty root dir, you get a file > entry back that says isFile, not isDirectory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13271) Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory
[ https://issues.apache.org/jira/browse/HADOOP-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159161#comment-16159161 ] Steve Loughran commented on HADOOP-13271: - HADOOP-14851 is probably the reason we haven't fixed this: even though there's a loop, because eventually() isn't looping around intercepts, there's no iteration when there's a (transient) list inconsistency > Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory > - > > Key: HADOOP-13271 > URL: https://issues.apache.org/jira/browse/HADOOP-13271 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Priority: Minor > > I'm seeing an intermittent failure of > {{TestS3AContractRootDir.testListEmptyRootDirectory}} > The sequence of: deleteFiles(listStatus(Path("/"))) is failing because the > file to delete is root ...yet the code is passing in the children of /, not / > itself. > hypothesis: when you call listStatus on an empty root dir, you get a file > entry back that says isFile, not isDirectory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
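A defensive variant of the failing sequence would drop any listing entry that is the root itself before handing the batch to a delete loop. A minimal sketch, with hypothetical names and plain strings standing in for Path objects:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: given the path strings returned by a listing of "/", keep only
 *  genuine children, guarding against a buggy store that echoes the root
 *  back as a listable (file) entry. Illustrative only; not the contract
 *  test code. */
public class RootListingGuard {
  static List<String> childrenOnly(String root, List<String> listing) {
    List<String> safe = new ArrayList<>();
    for (String p : listing) {
      if (!p.equals(root)) {   // never hand "/" itself to a delete loop
        safe.add(p);
      }
    }
    return safe;
  }
}
```

This doesn't fix the underlying listing inconsistency, but it keeps a transient bad entry from turning into an attempted delete of root.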
[jira] [Commented] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159122#comment-16159122 ] Hadoop QA commented on HADOOP-14851: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
12m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 38s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 7 new + 1 unchanged - 0 fixed = 8 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 29s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.sink.TestFileSink | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14851 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886124/HADOOP-14851-001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7182e894d89d 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c35510a | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13211/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13211/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13211/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13211/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL:
[jira] [Commented] (HADOOP-14842) Hadoop 2.8.2 release build process gets stuck due to java issue
[ https://issues.apache.org/jira/browse/HADOOP-14842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159095#comment-16159095 ] Xuan Gong commented on HADOOP-14842: +1 LGTM > Hadoop 2.8.2 release build process gets stuck due to java issue > -- > > Key: HADOOP-14842 > URL: https://issues.apache.org/jira/browse/HADOOP-14842 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Junping Du >Assignee: Junping Du >Priority: Blocker > Attachments: HADOOP-14842-branch-2.8-2.002.patch, > HADOOP-14842.branch-2.8-2.patch > > > My latest 2.8.2 release build (via docker) failed, with the following > errors: > > {noformat} > "/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean > install > Error: JAVA_HOME is not defined correctly. We cannot execute > /usr/lib/jvm/java-7-oracle/bin/java" > {noformat} > This looks related to HADOOP-14474. However, reverting that patch > doesn't work here because the build fails earlier, in the java > download/installation - maybe, as mentioned in HADOOP-14474, some java 7 > download addresses were changed by Oracle. > Hard coding my local JAVA_HOME into create-release or the Dockerfile doesn't work > here, although it shows the correct java home. My suspicion so far is that we still need > to download java 7 from somewhere for the build to continue in the docker > build process, but I haven't got a clue how to get through this. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14650) Update jsp-api version
[ https://issues.apache.org/jira/browse/HADOOP-14650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang resolved HADOOP-14650. - Resolution: Won't Fix Assignee: Ray Chiang > Update jsp-api version > -- > > Key: HADOOP-14650 > URL: https://issues.apache.org/jira/browse/HADOOP-14650 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > > The current artifact is: > javax.servlet.jsp:jsp-api:2.1 > That version could either be bumped to 2.1.2 (the latest of that line), or > use the latest artifact: > javax.servlet.jsp:javax.servlet.jsp-api:2.3.1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14656) Update xercesImpl version to 2.11.0
[ https://issues.apache.org/jira/browse/HADOOP-14656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang resolved HADOOP-14656. - Resolution: Won't Fix Assignee: Ray Chiang > Update xercesImpl version to 2.11.0 > --- > > Key: HADOOP-14656 > URL: https://issues.apache.org/jira/browse/HADOOP-14656 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > > Update the dependency > xerces:xercesImpl:2.9.1 > to the latest (2.11.0). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159084#comment-16159084 ] Junping Du commented on HADOOP-14521: - Thanks [~xiaochen] for notification here. I prefer to revert this from 2.8.2 but keep it in branch-2.8 and branch-2 for improvements later. > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HADOOP-14521.09.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher failure rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13442) Optimize UGI group lookups
[ https://issues.apache.org/jira/browse/HADOOP-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HADOOP-13442: - Fix Version/s: 2.7.5 Just committed along with broken tests HDFS-10738 and MAPREDUCE-6750 to branch-2.7. Thank you Daryn. Updating Fix Version for all three. > Optimize UGI group lookups > -- > > Key: HADOOP-13442 > URL: https://issues.apache.org/jira/browse/HADOOP-13442 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 2.8.0, 3.0.0-alpha1, 2.7.5 > > Attachments: HADOOP-13442.patch > > > {{UGI#getGroups}} and its usage is inefficient. The list is unnecessarily > converted to multiple collections. > For _every_ invocation, the {{List}} from the group provider is > converted into a {{LinkedHashSet}} (to de-dup), back to a > {{String[]}}. Then callers testing for group membership convert back to a > {{List}}. This should be done once to reduce allocations. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
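The single-conversion approach suggested in the description can be sketched like this: de-dup once into an immutable set and answer membership queries from it directly. Names are illustrative; the real UGI code differs:

```java
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Sketch: de-duplicate the provider's group list once, keep it as an
 *  unmodifiable Set, and answer membership tests directly from it,
 *  avoiding the List -> LinkedHashSet -> String[] -> List round trips
 *  described in the issue. Illustrative names, not the actual UGI
 *  implementation. */
public class GroupCache {
  private final Set<String> groups;

  GroupCache(List<String> fromProvider) {
    // one allocation, preserves provider order, removes duplicates
    this.groups = Collections.unmodifiableSet(new LinkedHashSet<>(fromProvider));
  }

  boolean isMember(String group) {
    return groups.contains(group);     // O(1), no further conversions
  }

  Set<String> getGroups() {
    return groups;                     // shared, immutable view
  }
}
```

Because the set is immutable, the same instance can be handed to every caller, so the per-invocation allocations the issue complains about disappear.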
[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting
[ https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159068#comment-16159068 ] Aaron T. Myers commented on HADOOP-14855: - [~aw] - does this ring any bells for you? Any thoughts on how to make this check more robust? > Hadoop scripts may errantly believe a daemon is still running, preventing it > from starting > -- > > Key: HADOOP-14855 > URL: https://issues.apache.org/jira/browse/HADOOP-14855 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha4 >Reporter: Aaron T. Myers > > I encountered a case recently where the NN wouldn't start, with the error > message "namenode is running as process 16769. Stop it first." In fact the > NN was not running at all, but rather another long-running process was > running with this pid. > It looks to me like our scripts just check to see if _any_ process is running > with the pid that the NN (or any Hadoop daemon) most recently ran with. This > is clearly not a fool-proof way of checking to see if a particular type of > daemon is now running, as some other process could start running with the > same pid since the daemon in question was previously shut down. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting
Aaron T. Myers created HADOOP-14855: --- Summary: Hadoop scripts may errantly believe a daemon is still running, preventing it from starting Key: HADOOP-14855 URL: https://issues.apache.org/jira/browse/HADOOP-14855 Project: Hadoop Common Issue Type: Bug Components: scripts Affects Versions: 3.0.0-alpha4 Reporter: Aaron T. Myers I encountered a case recently where the NN wouldn't start, with the error message "namenode is running as process 16769. Stop it first." In fact the NN was not running at all, but rather another long-running process was running with this pid. It looks to me like our scripts just check to see if _any_ process is running with the pid that the NN (or any Hadoop daemon) most recently ran with. This is clearly not a fool-proof way of checking to see if a particular type of daemon is now running, as some other process could start running with the same pid since the daemon in question was previously shut down. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
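A stricter liveness check than "some process exists with this pid" would also verify what the process is. A hedged, Linux-only sketch in Java (the real Hadoop checks live in shell scripts, and /proc parsing is platform-specific; the helper name is invented):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Sketch of a stricter daemon-liveness check: a pid only counts as "our
 *  daemon still running" if /proc/<pid>/cmdline also mentions the expected
 *  class/command. Linux-specific and illustrative; the actual Hadoop
 *  scripts would express this in shell. */
public class PidCheck {
  static boolean looksLikeDaemon(long pid, String expectedToken) {
    Path cmdline = Paths.get("/proc/" + pid + "/cmdline");
    if (!Files.exists(cmdline)) {
      return false;                 // no such process at all
    }
    try {
      // cmdline is NUL-separated; a substring test is enough for a sketch
      String cmd = new String(Files.readAllBytes(cmdline));
      return cmd.contains(expectedToken);
    } catch (IOException e) {
      return false;                 // raced with process exit
    }
  }
}
```

This closes the pid-reuse hole described above: a recycled pid belonging to an unrelated long-running process no longer looks like a live NameNode.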
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159027#comment-16159027 ] Xiao Chen commented on HADOOP-14521: Thanks for the prompt response [~shahrs87]. bq. The previous behavior was just masking the bugs on the server side. True, and I agree the server-side bugs should be fixed. However, as noted in the last comment, in practice the existing behavior allows a client request to succeed after retry. With this patch, clients will simply fail on the first failure. This incompatible behavior is painful for the client, and is the main reason I raise the above. HADOOP-14445 and HADOOP-14841 are just examples of this kind of failure. Although they are nasty bugs, my biggest concern now is not specific to any of them. Rather, it's the behavior change that made a previously working client not work anymore. This sounds pretty pressing to me. I can think of a few ways to keep existing behavior, but with 3.0beta and 2.8.2 coming, I'm inclined to revert this for now, and re-commit after improvement. [~andrew.wang] / [~djp] FYI. > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HADOOP-14521.09.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The kms client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. 
This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. > This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher failure rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
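The kind of client-side retry being argued for can be sketched generically as a bounded retry loop with backoff. This is an assumption-laden illustration, not the actual KMS client or Hadoop RetryPolicy code, and the names are invented:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

/** Sketch: retry an idempotent KMS-style operation a bounded number of
 *  times with linear backoff, rethrowing the last failure once attempts
 *  are exhausted. Illustrative only; the real KMS client work ties into
 *  Hadoop's RetryPolicy machinery and must consider idempotency. */
public class KmsRetry {
  static <T> T withRetries(Callable<T> op, int maxAttempts, long sleepMs)
      throws Exception {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return op.call();
      } catch (IOException e) {
        last = e;                          // possibly transient: try again
        if (attempt < maxAttempts) {
          Thread.sleep(sleepMs * attempt); // linear backoff between attempts
        }
      }
    }
    throw last;                            // exhausted: surface the failure
  }
}
```

This is the behavior the comment wants to preserve: a request that fails once (e.g. during a KMS restart) still succeeds on a later attempt instead of failing outright.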
[jira] [Commented] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client
[ https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159006#comment-16159006 ] Bharat Viswanadham commented on HADOOP-14771: - [~busbey] [~haibo.chen] hadoop-yarn-client is a transitive dependency of hadoop-mapreduce-client-core, and that dependency is at compile scope, so the jar will be pulled in anyway. Do you think this fix is still required? Let me know if I am missing something here. One more question: hadoop-client dates from hadoop 2.x, it is not new, right? This jar is used when we need all the hadoop client jars and their dependencies. > hadoop-client does not include hadoop-yarn-client > - > > Key: HADOOP-14771 > URL: https://issues.apache.org/jira/browse/HADOOP-14771 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Haibo Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HADOOP-14771.01.patch > > > The hadoop-client does not include hadoop-yarn-client, thus, the shaded > hadoop-client is incomplete. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13968) S3a FS to support "__magic" path for the special "unmaterialized" writes
[ https://issues.apache.org/jira/browse/HADOOP-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13968: Summary: S3a FS to support "__magic" path for the special "unmaterialized" writes (was: S3a FS to support ".temp_pending_put" path for pending put operations) > S3a FS to support "__magic" path for the special "unmaterialized" writes > > > Key: HADOOP-13968 > URL: https://issues.apache.org/jira/browse/HADOOP-13968 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran > > S3AFileSystem to add support for a special path, such as > {{.temp_pending_put/}} or similar, which, when used as the base of a path, > indicates that the file is actually to be saved to the parent dir, but only > via a delayed put commit operation. > At the same time, we may need blocks on some normal fileIO ops under these > dirs, especially rename and delete, as this would cause serious problems > including data loss and large bills for pending data. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
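Path-based detection of such unmaterialized writes can be sketched as follows. The helper below is hypothetical and deliberately simplified; the real S3A magic-committer path mapping is richer than a single stripped element:

```java
/** Sketch: decide whether a path is under a "__magic" directory and, if
 *  so, compute a destination by stripping that element. Illustrative
 *  only, with plain strings standing in for Hadoop Path objects; the
 *  actual S3A committer logic handles job/task subtrees as well. */
public class MagicPath {
  static final String MAGIC = "__magic";

  static boolean isMagic(String path) {
    for (String elt : path.split("/")) {
      if (MAGIC.equals(elt)) {
        return true;
      }
    }
    return false;
  }

  /** Destination the file should eventually materialize at. */
  static String destination(String path) {
    // drop the "__magic" element; all other elements are kept
    StringBuilder sb = new StringBuilder();
    for (String elt : path.split("/")) {
      if (!elt.isEmpty() && !MAGIC.equals(elt)) {
        sb.append('/').append(elt);
      }
    }
    return sb.length() == 0 ? "/" : sb.toString();
  }
}
```

A check like isMagic() is also where the filesystem could refuse plain rename/delete under the magic tree, the hazard the description warns about.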
[jira] [Updated] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14797: Attachment: HADOOP-14797.002.patch * Add update to LICENSE.txt > Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14797.001.patch, HADOOP-14797.002.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14851: Attachment: HADOOP-14851-001.patch Patch 001 # Errors are caught alongside Exception # Except for VM errors: stack overflow, out of memory, etc: the things you can't recover from. # On timeout of eventually/await the caught exception/error is rethrown # to avoid changing the signature of await and eventually (needed to keep them invocable in a Callable<> clause), the Throwable is cast up to Exception/Error before being rethrown # And {{TimeoutHandler}} has had its args/return value widened to Throwable. That would be incompatible, but it's not in use anywhere in the code. Tests: lots > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL: https://issues.apache.org/jira/browse/HADOOP-14851 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14851-001.patch > > > This is funny. The {{LambdaTestUtils.eventually()}} method, meant to spin > until a closure stops raising exceptions, doesn't catch {{Error}} and > subclasses, so doesn't fail on an {{Assert.assert()}} failure, which raises > an {{AssertionError}}. My bad :) > Example: > {code} > eventually(TIMEOUT, > () -> { > while (counter.incrementAndGet() < 5) { > assert false : "oops"; > } > }, > retryLogic); > {code} > Fix: catch Throwable, rethrow. Needs to add VirtualMachineError & subclasses > to the set of errors not to spin on (OOM, stack overflow, ...) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
[ https://issues.apache.org/jira/browse/HADOOP-14851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14851: Target Version/s: 3.0.0-beta1 Status: Patch Available (was: Open) > LambdaTestUtils.eventually() doesn't spin on Assertion failures > --- > > Key: HADOOP-14851 > URL: https://issues.apache.org/jira/browse/HADOOP-14851 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14851-001.patch > > > This is funny. The {{LambdaTestUtils.eventually()}} method, meant to spin > until a closure stops raising exceptions, doesn't catch {{Error}} and > subclasses, so doesn't fail on an {{Assert.assert()}} failure, which raises > an {{AssertionError}}. My bad :) > Example: > {code} > eventually(TIMEOUT, > () -> { > while (counter.incrementAndGet() < 5) { > assert false : "oops"; > } > }, > retryLogic); > {code} > Fix: catch Throwable, rethrow. Needs to add VirtualMachineError & subclasses > to the set of errors not to spin on (OOM, stack overflow, ...) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14849) some wrong spelling words update
[ https://issues.apache.org/jira/browse/HADOOP-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158979#comment-16158979 ] Hudson commented on HADOOP-14849: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12825 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12825/]) HADOOP-14849. some wrong spelling words update. Contributed by Chen (aengineer: rev c35510a465cbda72c08239bcb5537375478bec3a) * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/java/org/apache/hadoop/mapred/nativetask/NativeRuntime.java * (edit) hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/QuasiMonteCarlo.java * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java > some wrong spelling words update > > > Key: HADOOP-14849 > URL: https://issues.apache.org/jira/browse/HADOOP-14849 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Chen Hongfei >Assignee: Chen Hongfei >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HADOOP-14849.001.patch > > > Wrong spelling "refered" should be updated to "referred"; > "writting" should be updated to "writing"; > "destory" should be updated to "destroy"; > "ture" should be updated to "true"; > "interupt" should be updated to "interrupt"; -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158978#comment-16158978 ] Hudson commented on HADOOP-14850: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12825 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12825/]) HADOOP-14850. Read HttpServer2 resources directly from the source tree (aengineer: rev e8278b02a45d16569fdebfd1ac36b2e648ad1e1e) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java > Read HttpServer2 resources directly from the source tree (if exists) > > > Key: HADOOP-14850 > URL: https://issues.apache.org/jira/browse/HADOOP-14850 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Elek, Marton >Assignee: Elek, Marton > Fix For: 3.1.0 > > Attachments: HADOOP-14850.001.patch, HADOOP-14850.002.patch, > HADOOP-14850.003.patch > > > Currently the Hadoop server components can't be started from an IDE during > development. There are two reasons for that: > 1. some artifacts are in provided scope which are definitely needed to run > the server (see HDFS-12197) > 2. The src/main/webapp dir should be on the classpath (but is not). > In this issue I suggest fixing the second issue by reading the web resources > (html and css files) directly from the source tree and not from the classpath, > but ONLY if the src/main/webapp dir exists. A similar approach exists in > different projects (e.g. in Spark). > With this patch, web development of the web interfaces is significantly > easier, as the result can be checked immediately with a running server > (without rebuild/restart). I used this patch during the development of the > Ozone web interfaces. > As the original behaviour of the resource location has not changed if > "src/main/webapp" doesn't exist, I think it's quite safe. 
And as the method is > called only once during the creation of HttpServer2, there is also no > change in performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
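The mechanism described in the issue — prefer the source tree when it exists, otherwise keep the classpath lookup — can be sketched like this. The method and path names are illustrative assumptions; the actual change lives inside HttpServer2 and may differ.

```java
import java.io.File;
import java.net.URL;

/** Hedged sketch of dev-time web-resource resolution: serve html/css
 *  straight from src/main/webapp when running inside the source tree,
 *  fall back to the classpath (production behaviour) otherwise. */
public class WebAppDirSketch {

  static String resolveWebAppDir(String appName) {
    // Only present when launched from an IDE inside the source tree.
    File devDir = new File("src/main/webapp/" + appName);
    if (devDir.isDirectory()) {
      return devDir.getAbsolutePath();   // live-edit resources, no rebuild
    }
    // Unchanged production path: look the resource up on the classpath.
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    URL url = cl.getResource("webapps/" + appName);
    return url == null ? null : url.toString();
  }

  public static void main(String[] args) {
    // Outside a Hadoop source tree this prints null for an unknown app.
    System.out.println(resolveWebAppDir("hdfs"));
  }
}
```

Because the source-tree check only succeeds when `src/main/webapp` actually exists, deployed clusters never take the development branch — which is the safety argument made in the issue.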
[jira] [Commented] (HADOOP-14521) KMS client needs retry logic
[ https://issues.apache.org/jira/browse/HADOOP-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158976#comment-16158976 ] Rushabh S Shah commented on HADOOP-14521: - bq. As a result, this fix made HADOOP-14841 and HADOOP-14445 from 'wrong-but-works' kind of issues, become 'wrong-and-breaks' kind of issues. HADOOP-14445: I don't have enough cycles to work on this. I need to address backwards-incompatibility concerns on that one. I will have a patch in a few days. HADOOP-14841: In my opinion, we should try to add more instrumentation code to find out the root cause instead of just blindly retrying and ignoring the bug. bq. either provide an addendum or do a follow-on jira, to keep existing behavior. The previous behavior was just masking the bugs on the server side. I don't see any pressing need to go back to the old behavior. > KMS client needs retry logic > > > Key: HADOOP-14521 > URL: https://issues.apache.org/jira/browse/HADOOP-14521 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HADOOP-14521.09.patch, > HADOOP-14521-branch-2.8.002.patch, HADOOP-14521-branch-2.8.2.patch, > HADOOP-14521-trunk-10.patch, HDFS-11804-branch-2.8.patch, > HDFS-11804-trunk-1.patch, HDFS-11804-trunk-2.patch, HDFS-11804-trunk-3.patch, > HDFS-11804-trunk-4.patch, HDFS-11804-trunk-5.patch, HDFS-11804-trunk-6.patch, > HDFS-11804-trunk-7.patch, HDFS-11804-trunk-8.patch, HDFS-11804-trunk.patch > > > The KMS client appears to have no retry logic – at all. It's completely > decoupled from the ipc retry logic. This has major impacts if the KMS is > unreachable for any reason, including but not limited to network connection > issues, timeouts, the +restart during an upgrade+. 
> This has some major ramifications: > # Jobs may fail to submit, although oozie resubmit logic should mask it > # Non-oozie launchers may experience higher failure rates if they do not already have > retry logic. > # Tasks reading EZ files will fail, probably be masked by framework reattempts > # EZ file creation fails after creating a 0-length file – client receives > EDEK in the create response, then fails when decrypting the EDEK > # Bulk hadoop fs copies, and maybe distcp, will prematurely fail -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
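The retry logic this issue calls for can be illustrated with a minimal wrapper. To be clear about assumptions: the actual fix wires KMS operations into Hadoop's RetryPolicy machinery; the standalone helper below only sketches the idea (bounded retries with backoff on IOException) and is not the committed implementation.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

/** Hedged sketch of client-side retries around a KMS-style operation:
 *  retry IOExceptions (e.g. KMS unreachable during a rolling upgrade)
 *  a bounded number of times with linear backoff, then rethrow. */
public class KmsRetrySketch {

  static <T> T withRetries(int maxAttempts, long baseDelayMillis,
      Callable<T> op) throws Exception {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return op.call();
      } catch (IOException e) {          // transient: connection refused, timeout
        last = e;
        if (attempt < maxAttempts) {
          Thread.sleep(baseDelayMillis * attempt);   // linear backoff
        }
      }
    }
    throw last;                          // exhausted: surface the last failure
  }

  public static void main(String[] args) throws Exception {
    final int[] calls = {0};
    // Simulate a KMS that is down for the first two requests.
    String edek = withRetries(3, 1, () -> {
      if (++calls[0] < 3) {
        throw new IOException("connection refused");
      }
      return "decrypted-edek";
    });
    System.out.println(edek + " after " + calls[0] + " attempts");
  }
}
```

With a wrapper like this, a KMS restart during an upgrade turns the failure modes listed above (job submission, EZ reads, 0-length file creation) into short delays instead of hard errors.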
[jira] [Updated] (HADOOP-14849) some wrong spelling words update
[ https://issues.apache.org/jira/browse/HADOOP-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HADOOP-14849: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: (was: 3.0.0-alpha4) 3.1.0 Target Version/s: 3.1.0 Status: Resolved (was: Patch Available) Thank you for the contribution. I have committed this patch to the trunk. > some wrong spelling words update > > > Key: HADOOP-14849 > URL: https://issues.apache.org/jira/browse/HADOOP-14849 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Chen Hongfei >Assignee: Chen Hongfei >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HADOOP-14849.001.patch > > > Wrong spelling "refered" should be updated to "referred"; > "writting" should be updated to "writing"; > "destory" should be updated to "destroy"; > "ture" should be updated to "true"; > "interupt" should be updated to "interrupt"; -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HADOOP-14850: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Target Version/s: 3.1.0 (was: 3.0.0-beta1) Status: Resolved (was: Patch Available) Thank you for the contribution. I have committed this patch to the trunk. > Read HttpServer2 resources directly from the source tree (if exists) > > > Key: HADOOP-14850 > URL: https://issues.apache.org/jira/browse/HADOOP-14850 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Elek, Marton >Assignee: Elek, Marton > Fix For: 3.1.0 > > Attachments: HADOOP-14850.001.patch, HADOOP-14850.002.patch, > HADOOP-14850.003.patch > > > Currently the Hadoop server components can't be started from an IDE during > development. There are two reasons for that: > 1. some artifacts are in provided scope which are definitely needed to run > the server (see HDFS-12197) > 2. The src/main/webapp dir should be on the classpath (but is not). > In this issue I suggest fixing the second issue by reading the web resources > (html and css files) directly from the source tree and not from the classpath, > but ONLY if the src/main/webapp dir exists. A similar approach exists in > different projects (e.g. in Spark). > With this patch, web development of the web interfaces is significantly > easier, as the result can be checked immediately with a running server > (without rebuild/restart). I used this patch during the development of the > Ozone web interfaces. > As the original behaviour of the resource location has not changed if > "src/main/webapp" doesn't exist, I think it's quite safe. And as the method is > called only once during the creation of HttpServer2, there is also no > change in performance. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158941#comment-16158941 ] Anu Engineer edited comment on HADOOP-14850 at 9/8/17 5:08 PM: --- Thank you for the contribution. I have committed this patch to the trunk. was (Author: anu): Thank for the contribution. I have committed this patch to the trunk. > Read HttpServer2 resources directly from the source tree (if exists) > > > Key: HADOOP-14850 > URL: https://issues.apache.org/jira/browse/HADOOP-14850 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Elek, Marton >Assignee: Elek, Marton > Fix For: 3.1.0 > > Attachments: HADOOP-14850.001.patch, HADOOP-14850.002.patch, > HADOOP-14850.003.patch > > > Currently the Hadoop server components can't be started from an IDE during > development. There are two reasons for that: > 1. some artifacts are in provided scope which are definitely needed to run > the server (see HDFS-12197) > 2. The src/main/webapp dir should be on the classpath (but is not). > In this issue I suggest fixing the second issue by reading the web resources > (html and css files) directly from the source tree and not from the classpath, > but ONLY if the src/main/webapp dir exists. A similar approach exists in > different projects (e.g. in Spark). > With this patch, web development of the web interfaces is significantly > easier, as the result can be checked immediately with a running server > (without rebuild/restart). I used this patch during the development of the > Ozone web interfaces. > As the original behaviour of the resource location has not changed if > "src/main/webapp" doesn't exist, I think it's quite safe. And as the method is > called only once during the creation of HttpServer2, there is also no > change in performance. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14853) hadoop-mapreduce-client-app is not a client module
[ https://issues.apache.org/jira/browse/HADOOP-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-14853: Status: Patch Available (was: Open) > hadoop-mapreduce-client-app is not a client module > -- > > Key: HADOOP-14853 > URL: https://issues.apache.org/jira/browse/HADOOP-14853 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: HADOOP-14853.00.patch > > > hadoop-mapreduce-client-app is not a client module, and thus can be removed > as a dependency from hadoop-client module -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14853) hadoop-mapreduce-client-app is not a client module
[ https://issues.apache.org/jira/browse/HADOOP-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-14853: Attachment: HADOOP-14853.00.patch > hadoop-mapreduce-client-app is not a client module > -- > > Key: HADOOP-14853 > URL: https://issues.apache.org/jira/browse/HADOOP-14853 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: HADOOP-14853.00.patch > > > hadoop-mapreduce-client-app is not a client module, and thus can be removed > as a dependency from hadoop-client module -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14849) some wrong spelling words update
[ https://issues.apache.org/jira/browse/HADOOP-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158918#comment-16158918 ] Anu Engineer edited comment on HADOOP-14849 at 9/8/17 4:57 PM: --- +1, I will commit this shortly. Thank you for fixing this. I have also added you to the contributors list so you can assign JIRAs to yourself. was (Author: anu): +1, I will commit this shortly. Thank you for fixing this. > some wrong spelling words update > > > Key: HADOOP-14849 > URL: https://issues.apache.org/jira/browse/HADOOP-14849 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Chen Hongfei >Assignee: Chen Hongfei >Priority: Trivial > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14849.001.patch > > > Wrong spelling "refered" should be updated to "referred"; > "writting" should be updated to "writing"; > "destory" should be updated to "destroy"; > "ture" should be updated to "true"; > "interupt" should be updated to "interrupt"; -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14849) some wrong spelling words update
[ https://issues.apache.org/jira/browse/HADOOP-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer reassigned HADOOP-14849: - Assignee: Chen Hongfei > some wrong spelling words update > > > Key: HADOOP-14849 > URL: https://issues.apache.org/jira/browse/HADOOP-14849 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Chen Hongfei >Assignee: Chen Hongfei >Priority: Trivial > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14849.001.patch > > > Wrong spelling "refered" should be updated to "referred"; > "writting" should be updated to "writing"; > "destory" should be updated to "destroy"; > "ture" should be updated to "true"; > "interupt" should be updated to "interrupt"; -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14854) DistCp should not issue file status calls for files in the filter list
Mukul Kumar Singh created HADOOP-14854: -- Summary: DistCp should not issue file status calls for files in the filter list Key: HADOOP-14854 URL: https://issues.apache.org/jira/browse/HADOOP-14854 Project: Hadoop Common Issue Type: Bug Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh DistCp currently excludes the files in the filter list only when the files are added to the copy list. However, DistCp can be optimized by not issuing file status/getattr calls for the files in the filter list. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
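The optimization proposed above — test each path against the exclusion filters before issuing the remote file-status call, rather than fetching status for everything and filtering afterwards — can be sketched as follows. The types here are simplified stand-ins, not DistCp's real CopyFilter/FileStatus classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

/** Hedged sketch of filter-first copy-list building: a path matching the
 *  filter list is skipped before any status/getattr RPC is issued for it. */
public class FilterFirstSketch {

  static <S> List<S> buildCopyList(List<String> paths,
      Predicate<String> excluded, Function<String, S> fetchStatus) {
    List<S> copyList = new ArrayList<>();
    for (String path : paths) {
      if (excluded.test(path)) {
        continue;                              // no remote call for filtered paths
      }
      copyList.add(fetchStatus.apply(path));   // RPC only for paths we keep
    }
    return copyList;
  }

  public static void main(String[] args) {
    final int[] rpcs = {0};
    List<String> statuses = buildCopyList(
        List.of("/data/a", "/data/.tmp", "/data/b"),
        p -> p.endsWith(".tmp"),               // stand-in for the filter list
        p -> { rpcs[0]++; return "status:" + p; });
    System.out.println(statuses.size() + " entries, " + rpcs[0] + " RPCs");
  }
}
```

Against object stores, where each status/getattr call is a metadata request with real latency and cost, skipping the call for filtered paths is exactly where the saving comes from.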
[jira] [Commented] (HADOOP-14849) some wrong spelling words update
[ https://issues.apache.org/jira/browse/HADOOP-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158918#comment-16158918 ] Anu Engineer commented on HADOOP-14849: --- +1, I will commit this shortly. Thank you for fixing this. > some wrong spelling words update > > > Key: HADOOP-14849 > URL: https://issues.apache.org/jira/browse/HADOOP-14849 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Chen Hongfei >Priority: Trivial > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14849.001.patch > > > Wrong spelling "refered" should be updated to "referred"; > "writting" should be updated to "writing"; > "destory" should be updated to "destroy"; > "ture" should be updated to "true"; > "interupt" should be updated to "interrupt"; -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158914#comment-16158914 ] Anu Engineer commented on HADOOP-14850: --- +1, I am willing to commit this to the trunk. I am not sure if this is needed in beta1. [~elek] Please feel free to reopen this if you really want this in beta1 so we can discuss it. [~andrew.wang] Are you ok with this patch being committed to beta1? It will not have any impact per se, but there is no functionality gain either. Is there any guidance on what should go to beta1? Any change which will not impact code, or any code that is essential for beta1? Right now, I am holding the "essential" bar, since we want to release in Nov 2017. Please let me know if you think otherwise. > Read HttpServer2 resources directly from the source tree (if exists) > > > Key: HADOOP-14850 > URL: https://issues.apache.org/jira/browse/HADOOP-14850 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14850.001.patch, HADOOP-14850.002.patch, > HADOOP-14850.003.patch > > > Currently the Hadoop server components can't be started from an IDE during > development. There are two reasons for that: > 1. some artifacts are in provided scope which are definitely needed to run > the server (see HDFS-12197) > 2. The src/main/webapp dir should be on the classpath (but is not). > In this issue I suggest fixing the second issue by reading the web resources > (html and css files) directly from the source tree and not from the classpath, > but ONLY if the src/main/webapp dir exists. A similar approach exists in > different projects (e.g. in Spark). > With this patch, web development of the web interfaces is significantly > easier, as the result can be checked immediately with a running server > (without rebuild/restart). I used this patch during the development of the > Ozone web interfaces. 
> As the original behaviour of the resource location has not changed if > "src/main/webapp" doesn't exist, I think it's quite safe. And as the method is > called only once during the creation of HttpServer2, there is also no > change in performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14853) hadoop-mapreduce-client-app is not a client module
Haibo Chen created HADOOP-14853: --- Summary: hadoop-mapreduce-client-app is not a client module Key: HADOOP-14853 URL: https://issues.apache.org/jira/browse/HADOOP-14853 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0-alpha4 Reporter: Haibo Chen Assignee: Haibo Chen hadoop-mapreduce-client-app is not a client module, and thus can be removed as a dependency from hadoop-client module -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158853#comment-16158853 ] Hadoop QA commented on HADOOP-14850: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
28m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 52 unchanged - 2 fixed = 52 total (was 54) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 6s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}141m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestShellBasedUnixGroupsMapping | | | hadoop.security.TestRaceWhenRelogin | | | hadoop.io.compress.TestCodec | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14850 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886080/HADOOP-14850.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux bf11a17f43e4 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5bbca80 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13208/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13208/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13208/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Read HttpServer2 resources directly from the source tree (if exists) >
[jira] [Commented] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client
[ https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158843#comment-16158843 ] Sean Busbey commented on HADOOP-14771: -- I'd rather rely on direct dependencies to show intention. There's a ton of stuff that gets pulled in to the client artifacts transitively and much of it has required forensic work to see if it ought to be there. How are we testing that this works as intended? > hadoop-client does not include hadoop-yarn-client > - > > Key: HADOOP-14771 > URL: https://issues.apache.org/jira/browse/HADOOP-14771 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Haibo Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HADOOP-14771.01.patch > > > The hadoop-client does not include hadoop-yarn-client, thus, the shared > hadoop-client is incomplete. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client
[ https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158820#comment-16158820 ] Haibo Chen commented on HADOOP-14771: - Thanks [~bharatviswa] for the info! In that case, this may not be a blocker but definitely still very important to fix, as we probably want to be explicit about this. What are your thoughts, [~busbey]? > hadoop-client does not include hadoop-yarn-client > - > > Key: HADOOP-14771 > URL: https://issues.apache.org/jira/browse/HADOOP-14771 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Haibo Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HADOOP-14771.01.patch > > > The hadoop-client does not include hadoop-yarn-client, thus, the shared > hadoop-client is incomplete. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158798#comment-16158798 ] Hadoop QA commented on HADOOP-14220: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 18 new + 21 unchanged - 0 fixed = 39 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14220 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886092/HADOOP-14220-009.patch | | Optional Tests | asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle | | uname | Linux 6fed4253d518 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4a83170 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13209/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13209/testReport/ | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13209/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Enhance S3GuardTool with bucket-info and set-capacity commands, tests > - > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type:
[jira] [Commented] (HADOOP-14843) FsPermission symbolic parsing failed to detect invalid argument
[ https://issues.apache.org/jira/browse/HADOOP-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158788#comment-16158788 ] Jason Lowe commented on HADOOP-14843: - Thanks for updating the patch! Looks good overall, just a readability nit for these two lines: {code} assertEquals(950, new FsPermission("+rwrt").toShort()); [...] assertEquals(1023, new FsPermission("+rwxt").toShort()); {code} It would be easier to read using the octal notation for the integer constants as was done in the other toShort tests above this. > FsPermission symbolic parsing failed to detect invalid argument > --- > > Key: HADOOP-14843 > URL: https://issues.apache.org/jira/browse/HADOOP-14843 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.4, 2.8.1 >Reporter: Jason Lowe >Assignee: Bharat Viswanadham > Attachments: HADOOP-14843.01.patch, HADOOP-14843.02.patch, > HADOOP-14843.03.patch, HADOOP-14843.patch > > > A user misunderstood the syntax format for the FsPermission symbolic > constructor and passed the argument "-rwr" instead of "u=rw,g=r". In 2.7 and > earlier this was silently misinterpreted as mode 0777 and in 2.8 it oddly > became mode . In either case FsPermission should have flagged "-rwr" as > an invalid argument rather than silently misinterpreting it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
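The readability nit above is purely about notation: the decimal constants 950 and 1023 are exactly the octal mode literals 01666 and 01777 (sticky bit plus rw-rw-rw- and rwxrwxrwx). A minimal check of that equivalence, without depending on the FsPermission class:

```java
/** Decimal-to-octal equivalence for the mode constants discussed above. */
public class OctalModes {
  public static void main(String[] args) {
    System.out.println(01666 == 950);    // true: 1*512 + 6*64 + 6*8 + 6
    System.out.println(01777 == 1023);   // true: 1*512 + 7*64 + 7*8 + 7
  }
}
```

Written with octal literals, the expected values in the assertions read directly as permission modes, which is the point of the review comment.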
[jira] [Commented] (HADOOP-14852) Intermittent failure of S3Guard TestConsistencyListFiles
[ https://issues.apache.org/jira/browse/HADOOP-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158737#comment-16158737 ] Steve Loughran commented on HADOOP-14852: - Saw this working on HADOOP-14220; change order of assertions there to give a bit more diags on failure than just list size difference {code} Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 117.338 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency testConsistentListFiles(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency) Time elapsed: 43.87 sec <<< FAILURE! java.lang.AssertionError: s3a://hwdev-steve-frankfurt-new/fork-3/test/doTestListFiles-2-2-1-false/file-2-DELAY_LISTING_ME should have been listed at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.verifyFileIsListed(ITestS3GuardListConsistency.java:455) at org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.doTestListFiles(ITestS3GuardListConsistency.java:438) at org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListFiles(ITestS3GuardListConsistency.java:359) {code} > Intermittent failure of S3Guard TestConsistencyListFiles > > > Key: HADOOP-14852 > URL: https://issues.apache.org/jira/browse/HADOOP-14852 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran > > I'm seeing intermittent test failures with a test run of {{ -Dparallel-tests > -DtestsThreadCount=8 -Ds3guard -Ddynamo}} (-Dauth set or unset) in which a > file in DELAY-LISTING-ME isn't being returned in a listing. > Theories > * test is wrong > * config is wrong > * code is wrong -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14852) Intermittent failure of S3Guard TestConsistencyListFiles
Steve Loughran created HADOOP-14852: --- Summary: Intermittent failure of S3Guard TestConsistencyListFiles Key: HADOOP-14852 URL: https://issues.apache.org/jira/browse/HADOOP-14852 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 3.0.0-beta1 Reporter: Steve Loughran I'm seeing intermittent test failures with a test run of {{ -Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo}} (-Dauth set or unset) in which a file in DELAY-LISTING-ME isn't being returned in a listing. Theories * test is wrong * config is wrong * code is wrong -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220: Status: Patch Available (was: Open) > Enhance S3GuardTool with bucket-info and set-capacity commands, tests > - > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, > HADOOP-14220-009.patch, HADOOP-14220-HADOOP-13345-001.patch, > HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, > HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch > > > Add a diagnostics command to s3guard which does whatever we need to diagnose > problems for a specific (named) s3a url. This is something which can be > attached to bug reports as well as used by developers. > * Properties to log (with provenance attribute, which can track bucket > overrides: s3guard metastore setup, autocreate, capacity, > * table present/absent > * # of keys in DDB table for that bucket? > * any other stats? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220: Attachment: HADOOP-14220-009.patch HADOOP-14220: patch 009 * javadoc warnings * findbugs: Gave up on fixing, so excluding the line * found a race condition in the assertion that after changing the capacity of a table it -> UPDATING state. This held for DDB, but not always for local dynamo. The fix involved using eventually() to wrap the assertions, discovered HADOOP-14851 Tested: s3 frankfurt, with local and dynamo db, auth/non-auth. As noted: a race condition surfaced, is now fixed. > Enhance S3GuardTool with bucket-info and set-capacity commands, tests > - > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, > HADOOP-14220-009.patch, HADOOP-14220-HADOOP-13345-001.patch, > HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch, > HADOOP-14220-HADOOP-13345-004.patch, HADOOP-14220-HADOOP-13345-005.patch > > > Add a diagnostics command to s3guard which does whatever we need to diagnose > problems for a specific (named) s3a url. This is something which can be > attached to bug reports as well as used by developers. > * Properties to log (with provenance attribute, which can track bucket > overrides: s3guard metastore setup, autocreate, capacity, > * table present/absent > * # of keys in DDB table for that bucket? > * any other stats? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220: Status: Open (was: Patch Available) > Enhance S3GuardTool with bucket-info and set-capacity commands, tests > - > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, > HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, > HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, > HADOOP-14220-HADOOP-13345-005.patch > > > Add a diagnostics command to s3guard which does whatever we need to diagnose > problems for a specific (named) s3a url. This is something which can be > attached to bug reports as well as used by developers. > * Properties to log (with provenance attribute, which can track bucket > overrides: s3guard metastore setup, autocreate, capacity, > * table present/absent > * # of keys in DDB table for that bucket? > * any other stats? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-9383) mvn clean compile fails without install goal
[ https://issues.apache.org/jira/browse/HADOOP-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne reopened HADOOP-9383: Something similar is definitely still happening on fedora and redhat. This is what I'm getting. {noformat} Could not find artifact org.apache.hadoop:hadoop-maven-plugins:jar:3.1.0-SNAPSHOT {noformat} I'm reopening the JIRA. > mvn clean compile fails without install goal > > > Key: HADOOP-9383 > URL: https://issues.apache.org/jira/browse/HADOOP-9383 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha1 >Reporter: Arpit Agarwal > > 'mvn -Pnative-win clean compile' fails with the following error: > [ERROR] Could not find goal 'protoc' in plugin > org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT among available goals > -> [Help 1] > The build succeeds if the install goal is specified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-14850: -- Attachment: HADOOP-14850.003.patch Fix the resource usage of the unit tests as well. > Read HttpServer2 resources directly from the source tree (if exists) > > > Key: HADOOP-14850 > URL: https://issues.apache.org/jira/browse/HADOOP-14850 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14850.001.patch, HADOOP-14850.002.patch, > HADOOP-14850.003.patch > > > Currently the Hadoop server components can't be started from an IDE during > development. There are two reasons for that: > 1. some artifacts are in provided scope which are definitely needed to run > the server (see HDFS-12197) > 2. The src/main/webapp dir should be on the classpath (but is not). > In this issue I suggest fixing the second problem by reading the web resources > (html and css files) directly from the source tree instead of the classpath, > but ONLY if the src/main/webapp dir exists. A similar approach exists in > other projects (e.g. in Spark). > With this patch, web development of the web interfaces is significantly > easier, as the result can be checked immediately with a running server > (without rebuild/restart). I used this patch during the development of the > Ozone web interfaces. > As the original behaviour of the resource location has not been changed if > "src/main/webapp" doesn't exist, I think it's quite safe. And since the method is > called only once, during the creation of the HttpServer2, there is also no > change in performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
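The lookup described in the issue above can be sketched in isolation; this is a simplified stand-in with hypothetical names, not the actual HttpServer2 code: prefer src/main/webapp when it exists on disk, otherwise fall back to the classpath as before.

```java
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WebAppLocator {
    // Illustrative sketch of the described behaviour; method and class
    // names are hypothetical.
    static String findWebAppsPath(String appName) {
        Path dev = Paths.get("src/main/webapp");
        if (Files.isDirectory(dev)) {
            // Developer mode: the source tree exists, so serve html/css
            // directly from it; edits show up without a rebuild/restart.
            return dev.toAbsolutePath().toString();
        }
        // Original behaviour: resolve the resource from the classpath.
        URL url = WebAppLocator.class.getClassLoader()
            .getResource("webapps/" + appName);
        return url == null ? null : url.toExternalForm();
    }

    public static void main(String[] args) {
        // Outside a Hadoop checkout neither location exists, so this
        // falls through to the classpath lookup.
        System.out.println("path=" + findWebAppsPath("hdfs"));
    }
}
```

Because the source-tree branch is taken only when the directory exists, production deployments (which ship resources in jars) keep the original classpath behaviour, which is why the patch is described as safe.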
[jira] [Comment Edited] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158638#comment-16158638 ] Elek, Marton edited comment on HADOOP-14850 at 9/8/17 1:32 PM: --- Fixing the resource usage of the unit tests as well. was (Author: elek): Fix the resource usage of the unit tests as well. > Read HttpServer2 resources directly from the source tree (if exists) > > > Key: HADOOP-14850 > URL: https://issues.apache.org/jira/browse/HADOOP-14850 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14850.001.patch, HADOOP-14850.002.patch, > HADOOP-14850.003.patch > > > Currently the Hadoop server components can't be started from an IDE during > development. There are two reasons for that: > 1. some artifacts are in provided scope which are definitely needed to run > the server (see HDFS-12197) > 2. The src/main/webapp dir should be on the classpath (but is not). > In this issue I suggest fixing the second problem by reading the web resources > (html and css files) directly from the source tree instead of the classpath, > but ONLY if the src/main/webapp dir exists. A similar approach exists in > other projects (e.g. in Spark). > With this patch, web development of the web interfaces is significantly > easier, as the result can be checked immediately with a running server > (without rebuild/restart). I used this patch during the development of the > Ozone web interfaces. > As the original behaviour of the resource location has not been changed if > "src/main/webapp" doesn't exist, I think it's quite safe. And since the method is > called only once, during the creation of the HttpServer2, there is also no > change in performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-8391) Hadoop-auth should use log4j
[ https://issues.apache.org/jira/browse/HADOOP-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HADOOP-8391. -- Resolution: Won't Fix I agree with Steve. Hadoop modules are moving toward SLF4J. > Hadoop-auth should use log4j > > > Key: HADOOP-8391 > URL: https://issues.apache.org/jira/browse/HADOOP-8391 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins > > Per HADOOP-8086 hadoop-auth uses slf4j, don't see why it shouldn't use log4j > to be consistent with the rest of Hadoop. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-9383) mvn clean compile fails without install goal
[ https://issues.apache.org/jira/browse/HADOOP-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HADOOP-9383. -- Resolution: Cannot Reproduce > mvn clean compile fails without install goal > > > Key: HADOOP-9383 > URL: https://issues.apache.org/jira/browse/HADOOP-9383 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha1 >Reporter: Arpit Agarwal > > 'mvn -Pnative-win clean compile' fails with the following error: > [ERROR] Could not find goal 'protoc' in plugin > org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT among available goals > -> [Help 1] > The build succeeds if the install goal is specified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14850) Read HttpServer2 resources directly from the source tree (if exists)
[ https://issues.apache.org/jira/browse/HADOOP-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158500#comment-16158500 ] Hadoop QA commented on HADOOP-14850: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
11m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 52 unchanged - 2 fixed = 52 total (was 54) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 47s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | Redundant nullcheck of resourceUrl, which is known to be non-null in org.apache.hadoop.http.HttpServer2.getWebAppsPath(String) Redundant null check at HttpServer2.java:is known to be non-null in org.apache.hadoop.http.HttpServer2.getWebAppsPath(String) Redundant null check at HttpServer2.java:[line 1012] | | Failed junit tests | hadoop.http.TestGlobalFilter | | | hadoop.http.TestPathFilter | | | hadoop.http.TestHttpServer | | | hadoop.fs.sftp.TestSFTPFileSystem | | | hadoop.http.TestSSLHttpServer | | | hadoop.http.TestServletFilter | | | hadoop.http.TestHttpServerLogs | | | hadoop.http.TestAuthenticationSessionCookie | | | hadoop.security.TestKDiag | | | hadoop.http.TestHttpServerWebapps | | | hadoop.http.TestHttpServerWithSpengo | | | hadoop.http.TestHttpCookieFlag | | | hadoop.http.TestHttpServerLifecycle | | | hadoop.jmx.TestJMXJsonServlet | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14850 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886038/HADOOP-14850.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 986b878e7838 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | |
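The FindBugs warning in the QA report above (a redundant null check on resourceUrl in HttpServer2.getWebAppsPath) refers to a check on a value the analyser has already proven non-null at that point. A minimal reconstruction of the general pattern, with hypothetical names rather than the actual HttpServer2 source:

```java
import java.io.FileNotFoundException;
import java.net.URL;

public class RcnExample {
    // Illustrative sketch of the RCN (redundant null check) pattern: once
    // the value has survived the explicit null guard, any later
    // "if (resourceUrl != null)" on the same value is dead code and
    // FindBugs flags it.
    static String webAppsPath(URL resourceUrl) throws FileNotFoundException {
        if (resourceUrl == null) {
            throw new FileNotFoundException("webapps not found");
        }
        // resourceUrl is provably non-null past the guard above, so a
        // second null check here would be the redundant one.
        return resourceUrl.getPath();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(webAppsPath(new URL("file:/tmp/webapps/hdfs")));
    }
}
```

The fix is simply to delete the second check (or the first, if the guard was meant to live elsewhere), which is presumably what a follow-up patch revision would do.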
[jira] [Commented] (HADOOP-13421) Switch to v2 of the S3 List Objects API in S3A
[ https://issues.apache.org/jira/browse/HADOOP-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158488#comment-16158488 ] Hudson commented on HADOOP-13421: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12822 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12822/]) HADOOP-13421. Switch to v2 of the S3 List Objects API in S3A. (stevel: rev 5bbca80428ffbe776650652de86a3bba885edb31) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ListResult.java * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md * (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ListRequest.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AGetFileStatus.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java * (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AContractGetFileStatusV1List.java > Switch to v2 of the S3 List Objects API in S3A > -- > > Key: HADOOP-13421 > URL: https://issues.apache.org/jira/browse/HADOOP-13421 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steven K. Wong >Assignee: Aaron Fabbri >Priority: Minor > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-13421.002.patch, HADOOP-13421.003.patch, > HADOOP-13421.004.patch, HADOOP-13421-HADOOP-13345.001.patch > > > Unlike [version > 1|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html] of the > S3 List Objects API, [version > 2|http://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html] by > default does not fetch object owner information, which S3A doesn't need > anyway. By switching to v2, there will be less data to transfer/process. > Also, it should be more robust when listing a versioned bucket with "a large > number of delete markers" ([according to > AWS|https://aws.amazon.com/releasenotes/Java/0735652458007581]). > Methods in S3AFileSystem that use this API include: > * getFileStatus(Path) > * innerDelete(Path, boolean) > * innerListStatus(Path) > * innerRename(Path, Path) > Requires AWS SDK 1.10.75 or later. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14851) LambdaTestUtils.eventually() doesn't spin on Assertion failures
Steve Loughran created HADOOP-14851: --- Summary: LambdaTestUtils.eventually() doesn't spin on Assertion failures Key: HADOOP-14851 URL: https://issues.apache.org/jira/browse/HADOOP-14851 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 2.8.1 Reporter: Steve Loughran Assignee: Steve Loughran This is funny. The {{LambdaTestUtils.eventually()}} method, meant to spin until a closure stops raising exceptions, doesn't catch {{Error}} and subclasses, so it fails immediately on an {{Assert.assert()}} failure, which raises an {{AssertionError}}, instead of retrying. My bad :) Example: {code} eventually(TIMEOUT, () -> { while (counter.incrementAndGet() < 5) { assert false : "oops"; } }, retryLogic); {code} Fix: catch Throwable, rethrow. Needs to add VirtualMachineError & subclasses to the set of errors not to spin on (OOM, stack overflow, ...) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
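The fix described in the issue above ("catch Throwable, rethrow", but never spinning on VirtualMachineError) might look roughly like this. The names and signature are a simplified stand-in, not the actual LambdaTestUtils API:

```java
public class RetrySketch {
    interface Probe { void eval() throws Throwable; }

    // Simplified stand-in for the described fix: retry on any Throwable,
    // including AssertionError, but rethrow VirtualMachineError (OOM,
    // stack overflow, ...) immediately rather than spinning on it.
    static void eventually(long timeoutMillis, Probe probe, long intervalMillis)
            throws Throwable {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        Throwable last = null;
        do {
            try {
                probe.eval();
                return;                     // closure stopped raising: done
            } catch (VirtualMachineError e) {
                throw e;                    // never spin on JVM-level errors
            } catch (Throwable t) {
                last = t;                   // now also covers AssertionError
            }
            Thread.sleep(intervalMillis);
        } while (System.currentTimeMillis() < deadline);
        throw last;                         // timed out: rethrow last failure
    }

    public static void main(String[] args) throws Throwable {
        int[] attempts = {0};
        eventually(5_000, () -> {
            attempts[0]++;
            if (attempts[0] < 3) {
                throw new AssertionError("not yet");  // retried, not fatal
            }
        }, 10);
        System.out.println("attempts=" + attempts[0]);
    }
}
```

With the original Exception-only catch, the AssertionError in main would propagate on the first attempt; with the broadened catch it is retried until the probe succeeds or the deadline passes.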