[jira] [Updated] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception
[ https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13590: --- Attachment: HADOOP-13590.08.patch Thanks [~andrew.wang] for looking at this! Patch 8 should address all the comments: bq. unit test Now TestUGI tests just the retry logic, and the new {{TestUGIWithMiniKdc}} tests that it retries at all. bq. Exponential back-off Combining your comment with [~ste...@apache.org]'s comment about using {{RetryPolicy}}, I changed the retry-time calculation. It now first calculates the maximum number of retries that could possibly be needed, then creates an {{ExponentialBackoffRetry}} object and delegates the calculation to it. This way we get the randomized interval plus code reuse, though the UGI code is (I think) harder to read. > Retry until TGT expires even if the UGI renewal thread encountered exception > > > Key: HADOOP-13590 > URL: https://issues.apache.org/jira/browse/HADOOP-13590 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0, 2.7.3, 2.6.4 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, > HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, > HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch > > > The UGI has a background thread to renew the TGT. On exception, it > [terminates > itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]. > If something temporarily goes wrong and results in an IOE, no renewal will be > done even after the error recovers, and the client will eventually fail to > authenticate. We should retry with best effort until the TGT expires, in the > hope that the error recovers before then.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
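The retry-interval calculation described in the comment above can be pictured with a small stand-alone sketch: a randomized exponential back-off whose sleep is capped so retries never run past the TGT end time. All names below ({{BackoffSketch}}, {{nextRetryMillis}}) are illustrative; this is not the UGI patch itself nor Curator's {{ExponentialBackoffRetry}}.

```java
import java.util.Random;

/**
 * Sketch of a randomized exponential back-off capped at TGT expiry.
 * Illustrative only -- not the actual UGI or RetryPolicy API.
 */
public class BackoffSketch {
  private static final Random RANDOM = new Random();

  /** Base sleep doubled each retry, scaled by a random factor in [0.5, 1.5). */
  static long nextRetryMillis(long baseSleepMillis, int retryCount,
                              long tgtEndTimeMillis, long nowMillis) {
    double factor = 0.5 + RANDOM.nextDouble();           // randomize within the interval
    long backoff = (long) (baseSleepMillis * (1L << retryCount) * factor);
    long untilExpiry = tgtEndTimeMillis - nowMillis;
    return Math.min(backoff, Math.max(untilExpiry, 0));  // never sleep past expiry
  }
}
```

The random factor matters when many clients share a KDC: without it, all renewal threads that failed at the same moment would retry in lockstep.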
[jira] [Commented] (HADOOP-13777) Trim configuration values in `rumen`
[ https://issues.apache.org/jira/browse/HADOOP-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624429#comment-15624429 ] Hadoop QA commented on HADOOP-13777: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-rumen in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13777 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836282/HADOOP-13777..patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 80f14278df91 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7ba74be | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10941/testReport/ | | modules | C: hadoop-tools/hadoop-rumen U: hadoop-tools/hadoop-rumen | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10941/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Issue Comment Deleted] (HADOOP-13777) Trim configuration values in `rumen`
[ https://issues.apache.org/jira/browse/HADOOP-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianyin Xu updated HADOOP-13777: Comment: was deleted (was: patch against trunk) > Trim configuration values in `rumen` > > > Key: HADOOP-13777 > URL: https://issues.apache.org/jira/browse/HADOOP-13777 > Project: Hadoop Common > Issue Type: Bug > Components: tools >Affects Versions: 3.0.0-alpha1 >Reporter: Tianyin Xu >Priority: Minor > Attachments: HADOOP-13777..patch > > > The current implementation of {{ClassName.java}} in {{rumen}} does not follow > the practice of trimming configuration values. This leads to silent and > hard-to-diagnose errors if users set values containing spaces or > newlines---basically, classes that should be anonymized will not be. > See the previous commits as reference (to list a few): > HADOOP-6578. Configuration should trim whitespace around a lot of value types > HADOOP-6534. Trim whitespace from directory lists initializing > HDFS-9708. FSNamesystem.initAuditLoggers() doesn't trim classnames > HDFS-2799. Trim fs.checkpoint.dir values. > YARN-3395. FairScheduler: Trim whitespaces when using username for queuename. > YARN-2869. CapacityScheduler should trim sub queue names when parse > configuration. 
> Patch is available against trunk (tested): > {code:title=ClassName.java|borderStyle=solid} > @@ -43,15 +43,13 @@ protected String getPrefix() { >@Override >protected boolean needsAnonymization(Configuration conf) { > -String[] preserves = conf.getStrings(CLASSNAME_PRESERVE_CONFIG); > -if (preserves != null) { > - // do a simple starts with check > - for (String p : preserves) { > -if (className.startsWith(p)) { > - return false; > -} > +String[] preserves = conf.getTrimmedStrings(CLASSNAME_PRESERVE_CONFIG); > +// do a simple starts with check > +for (String p : preserves) { > + if (className.startsWith(p)) { > +return false; >} > } > return true; >} > {code} > (the NULL check is no longer needed because {{getTrimmedStrings}} returns an > empty array if nothing is set)
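For readers unfamiliar with the trimmed variant, the behavioral difference the patch relies on can be sketched with a stand-in method (this mimics, and is not, Hadoop's {{Configuration.getTrimmedStrings}}): values are split on commas, surrounding whitespace (including newlines) is stripped, and an unset key yields an empty array rather than null, which is why the caller's null check becomes unnecessary.

```java
import java.util.Arrays;

/**
 * Stand-in for the trimming behavior discussed above -- not the Hadoop
 * Configuration class itself, just a model of how the trimmed variant
 * differs from the plain one.
 */
public class TrimSketch {
  static String[] getTrimmedStrings(String raw) {
    if (raw == null || raw.trim().isEmpty()) {
      return new String[0];               // unset key: empty array, never null
    }
    return Arrays.stream(raw.split(","))
        .map(String::trim)                // strip spaces/newlines around each class name
        .filter(s -> !s.isEmpty())
        .toArray(String[]::new);
  }
}
```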
[jira] [Updated] (HADOOP-13777) Trim configuration values in `rumen`
[ https://issues.apache.org/jira/browse/HADOOP-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianyin Xu updated HADOOP-13777: Attachment: (was: HADOOP-13777.patch) > Trim configuration values in `rumen`
[jira] [Updated] (HADOOP-13777) Trim configuration values in `rumen`
[ https://issues.apache.org/jira/browse/HADOOP-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianyin Xu updated HADOOP-13777: Attachment: HADOOP-13777..patch patch against trunk > Trim configuration values in `rumen`
[jira] [Updated] (HADOOP-13777) Trim configuration values in `rumen`
[ https://issues.apache.org/jira/browse/HADOOP-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianyin Xu updated HADOOP-13777: Status: Patch Available (was: Open) > Trim configuration values in `rumen`
[jira] [Updated] (HADOOP-13777) Trim configuration values in `rumen`
[ https://issues.apache.org/jira/browse/HADOOP-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianyin Xu updated HADOOP-13777: Attachment: HADOOP-13777.patch patch against trunk > Trim configuration values in `rumen`
[jira] [Created] (HADOOP-13777) Trim configuration values in `rumen`
Tianyin Xu created HADOOP-13777: --- Summary: Trim configuration values in `rumen` Key: HADOOP-13777 URL: https://issues.apache.org/jira/browse/HADOOP-13777 Project: Hadoop Common Issue Type: Bug Components: tools Affects Versions: 3.0.0-alpha1 Reporter: Tianyin Xu Priority: Minor
[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624384#comment-15624384 ] Ravi Prakash commented on HADOOP-13773: --- Hi Fei Hui! Welcome to the community and thanks for your contribution! I've taken the liberty of editing the JIRA with the field values we are used to. (Fix Version is set when the patch is merged. Target Version is set to the next expected release that would contain the fix. Description contains the problem.) https://wiki.apache.org/hadoop/HowToContribute is a fairly verbose guide on how to contribute. Instead of making you read it in its entirety, I'd suggest uploading a patch file instead of a GitHub pull request (because what happens to review discussions if Github.com were to fold tomorrow?). Just FYI, Allen rewrote the shell scripts for trunk (https://issues.apache.org/jira/browse/HADOOP-9902) and he's the reigning expert in that area, so I'll defer to his better judgement. Unfortunately those improvements were not fully backported into branch-2 (which explains any discrepancy you may be seeing). In trunk, the problem of multiple Xmx values is handled much more elegantly with [hadoop_add_param|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh#L830]. However, your fix makes sense to me for branch-2: we shouldn't be appending {{-Xmx=512m}} indiscriminately. Could you please upload the patch file and I'll be happy to commit it. > wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2 > --- > > Key: HADOOP-13773 > URL: https://issues.apache.org/jira/browse/HADOOP-13773 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.6.1, 2.7.3 >Reporter: Fei Hui > > in conf/hadoop-env.sh, > export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS" > when I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take effect. 
> Looking at bin/hadoop, > exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@" > HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take effect. > For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is > 'java -Xmx1024m ... -Xmx512m...', so -Xmx512m takes effect and -Xmx1024m is > ignored
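The root cause in the report above is HotSpot's "last option wins" resolution of repeated JVM flags: when both -Xmx1024m and -Xmx512m appear, the later one silently overrides the earlier. A small model of that resolution rule ({{lastXmx}} is an illustrative helper, not JVM or Hadoop code):

```java
/**
 * Model of how the JVM resolves a repeated -Xmx flag: the last occurrence
 * on the command line wins. Illustrative only.
 */
public class XmxSketch {
  static String lastXmx(String... jvmArgs) {
    String winner = null;
    for (String arg : jvmArgs) {
      if (arg.startsWith("-Xmx")) {
        winner = arg;                     // later occurrences override earlier ones
      }
    }
    return winner;
  }
}
```

This is why appending a fixed -Xmx512m after the user's HADOOP_HEAPSIZE-derived flag defeats the user's setting, and why trunk's hadoop_add_param approach (which avoids adding a duplicate) is the cleaner fix.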
[jira] [Updated] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HADOOP-13773: -- Target Version/s: 2.8.0 (was: 2.7.3) > wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2 > --- > > Key: HADOOP-13773 > URL: https://issues.apache.org/jira/browse/HADOOP-13773 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.6.1, 2.7.3 >Reporter: Fei Hui >
[jira] [Updated] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HADOOP-13773: -- Description: in conf/hadoop-env.sh, export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS" when I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take effect. Looking at bin/hadoop, exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@" HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take effect. For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 'java -Xmx1024m ... -Xmx512m...', so -Xmx512m takes effect and -Xmx1024m is ignored > wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[jira] [Updated] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HADOOP-13773: -- Fix Version/s: (was: 2.7.4) (was: 2.9.0) (was: 2.8.0) > wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2 > --- > > Key: HADOOP-13773 > URL: https://issues.apache.org/jira/browse/HADOOP-13773 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.6.1, 2.7.3 >Reporter: Fei Hui >
[jira] [Commented] (HADOOP-13776) remove redundant classpath entries in RunJar
[ https://issues.apache.org/jira/browse/HADOOP-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624248#comment-15624248 ] Hadoop QA commented on HADOOP-13776: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 11s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13776 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836240/HADOOP-13776.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b00d751498df 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7ba74be | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10940/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10940/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > remove redundant classpath entries in RunJar > > > Key: HADOOP-13776 > URL: https://issues.apache.org/jira/browse/HADOOP-13776 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Attachments: HADOOP-13776.01.patch > > > Today when you run a "hadoop jar" command, the content of the jar gets
[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624188#comment-15624188 ] Aaron Fabbri commented on HADOOP-13651: --- Following up on your security comments, [~ste...@apache.org]. To make sure I'm understanding, is it correct to say that: - S3A FileSystem authorization is delegated to the AWS S3 SDK client. - S3A code does not check hadoop user permissions, nor map hadoop users to AWS credentials. - So authorization is not "per user" in the hadoop sense, but "per configuration", as that is where S3A credentials / instance roles / etc. are defined. - If a user tries to open an s3a:// FileSystem and they do not supply/configure proper AWS credentials, S3AFileSystem.initialize() will throw an exception in verifyBucketExists() -> s3.doesBucketExist(). - It should be sufficient to only allow MetadataStore read/write operations after success of the corresponding S3 read/write operation. Questions: - If a user has valid AWS credentials, but no read permissions for a given bucket, what happens? Does initialize() succeed? (I can test this if needed) - What needs to be done before we can commit this patch (besides the LOG.isDebugEnabled thing)? I'd like to get this basic support into the feature branch so [~eddyxu] and [~liuml07] can integrate with it. I agree we need to address security and add tests to demonstrate its correctness. I'd be happy to take a follow-up JIRA on that as well, or we can hold this patch up. > S3Guard: S3AFileSystem Integration with MetadataStore > - > > Key: HADOOP-13651 > URL: https://issues.apache.org/jira/browse/HADOOP-13651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13651-HADOOP-13345.001.patch, > HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch > > > Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata > consistency and caching. 
> Implementation should have minimal overhead when no MetadataStore is configured.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
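The ordering rule discussed in the comment above — only update the MetadataStore after the corresponding S3 operation succeeds, so a caller whose AWS credentials are rejected can never pollute the store — can be sketched as follows. This is an illustrative stand-in, not the actual S3A code: `BlobStore`, `GuardedStoreSketch`, and the method names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the S3 client; a real call would throw if the
// configured AWS credentials lack permission for the bucket.
interface BlobStore {
    void put(String key, String value);
}

public class GuardedStoreSketch {
    private final BlobStore s3;
    private final Map<String, String> metadataStore = new HashMap<>();

    public GuardedStoreSketch(BlobStore s3) {
        this.s3 = s3;
    }

    /** Update the metadata store only after the S3 write succeeds. */
    public void create(String key, String value) {
        s3.put(key, value);            // may throw on denied access
        metadataStore.put(key, value); // reached only when the S3 write succeeded
    }

    public boolean inMetadataStore(String key) {
        return metadataStore.containsKey(key);
    }
}
```

With this ordering, authorization remains entirely "per configuration" in the S3 layer; the metadata store never needs its own permission checks.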
[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624145#comment-15624145 ] Fei Hui commented on HADOOP-13773:
--
Anyone review it?
> wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Affects Versions: 2.6.1, 2.7.3
> Reporter: Fei Hui
> Fix For: 2.8.0, 2.9.0, 2.7.4
[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception
[ https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623893#comment-15623893 ] Andrew Wang commented on HADOOP-13590:
--
A couple of comments to try and push this forward:
* I think the metric should be a MutableGauge instead of just a long.
* Exponential back-off is supposed to be randomized within an exponentially increasing interval.
* Regarding unit test flakiness, I'm okay with a unit test for just the retry logic, and then another unit test that makes sure it retries at all. IMO we should avoid sleeping in tests whenever possible, since unit tests are supposed to be quick to run.
> Retry until TGT expires even if the UGI renewal thread encountered exception
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
> Issue Type: Improvement
> Components: security
> Affects Versions: 2.8.0, 2.7.3, 2.6.4
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, HADOOP-13590.06.patch, HADOOP-13590.07.patch
>
> The UGI has a background thread to renew the tgt. On exception, it [terminates itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something temporarily goes wrong that results in an IOE, even if it recovered, no renewal will be done and the client will eventually fail to authenticate. We should retry with our best effort, until the TGT expires, in the hope that the error recovers before that.
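The back-off scheme Andrew describes — a random wait drawn from an exponentially increasing interval — can be sketched like this. This is an illustrative helper, not the actual UGI renewal code; the names `backoffMillis` and `capMillis` are invented for the example.

```java
import java.util.concurrent.ThreadLocalRandom;

public class BackoffSketch {
    /**
     * Returns a sleep time in ms, drawn uniformly from
     * [0, min(baseMillis * 2^retryCount, capMillis)).
     */
    public static long backoffMillis(long baseMillis, int retryCount, long capMillis) {
        // Cap the shift so the interval cannot overflow for large retry counts.
        long ceiling = Math.min(baseMillis << Math.min(retryCount, 30), capMillis);
        // Randomizing within the interval keeps many clients from retrying in lockstep.
        return ThreadLocalRandom.current().nextLong(ceiling);
    }
}
```

The key property is that the *ceiling* of the interval grows exponentially while each actual wait is random, which is what distinguishes this from a plain doubling sleep.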
[jira] [Commented] (HADOOP-8500) Fix javadoc jars to not contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623847#comment-15623847 ] Hudson commented on HADOOP-8500:
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10739 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10739/])
HADOOP-8500. Fix javadoc jars to not contain entire target directory. (wang: rev 7ba74befcff2f1836c2d5123d64e92a3c7a8898c)
* (edit) hadoop-project/pom.xml
* (edit) hadoop-dist/pom.xml
> Fix javadoc jars to not contain entire target directory
> ---
>
> Key: HADOOP-8500
> URL: https://issues.apache.org/jira/browse/HADOOP-8500
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.0.0-alpha
> Environment: N/A
> Reporter: EJ Ciramella
> Assignee: Andrew Wang
> Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-8500.001.patch, HADOOP-8500.002.patch, HADOOP-8500.003.patch, HADOOP-8500.patch, site-redo.tar
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> The javadoc jars contain the contents of the target directory - which includes classes and all sorts of binary files that it shouldn't. Sometimes the resulting javadoc jar is 10X bigger than it should be.
> The fix is to reconfigure maven to use "api" as its destDir for javadoc generation. I have a patch/diff incoming.
[jira] [Updated] (HADOOP-8500) Fix javadoc jars to not contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-8500:
--
Hadoop Flags: Reviewed
> Fix javadoc jars to not contain entire target directory
> Key: HADOOP-8500
> URL: https://issues.apache.org/jira/browse/HADOOP-8500
[jira] [Commented] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623812#comment-15623812 ] Hadoop QA commented on HADOOP-10300: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 53s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 20s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2.7 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 44s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.7 has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 7s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 222 unchanged - 5 fixed = 223 total (was 227) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3707 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 41s{color} | {color:red} The patch 113 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 0s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 90m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_111 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | | | hadoop.util.bloom.TestBloomFilters | | JDK v1.8.0_111 Timed out junit tests | org.apache.hadoop.conf.TestConfiguration | | JDK v1.7.0_111 Failed junit tests | hadoop.ipc.TestDecayRpcScheduler | | | hadoop.util.bloom.TestBloomFilters | | JDK v1.7.0_111 Timed out junit tests |
[jira] [Updated] (HADOOP-8500) Fix javadoc jars to not contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-8500:
Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
Release Note: Hadoop's javadoc jars should be significantly smaller, and contain only javadoc. As a related cleanup, the dummy hadoop-dist-* jars are no longer generated as part of the build.
Status: Resolved (was: Patch Available)
Committed to trunk, thanks Xiao for reviewing!
> Fix javadoc jars to not contain entire target directory
> Key: HADOOP-8500
> URL: https://issues.apache.org/jira/browse/HADOOP-8500
[jira] [Updated] (HADOOP-8500) Fix javadoc jars to not contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-8500:
Summary: Fix javadoc jars to not contain entire target directory (was: Javadoc jars contain entire target directory)
> Fix javadoc jars to not contain entire target directory
> Key: HADOOP-8500
> URL: https://issues.apache.org/jira/browse/HADOOP-8500
[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623728#comment-15623728 ] Andrew Wang commented on HADOOP-11804: -- Thanks for working on this Sean! Not sure why the patch didn't apply for precommit, since it applied for me locally. Since it modifies hadoop-maven-plugins, I "mvn install"'d it first per the normal build instructions. Some review comments: * Typos: "hte" "htey" "dependnecies" "itis" * I see this comment: {{skip org.apache.avro:avro-ipc because it doesn't look like hadoop-common actually uses it}}. If there are other issues like this, it'd be nice to surface them in a JIRA comment or JIRA so we can think about fixing them properly. * The build failed for me with this error: {noformat} [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message: Duplicate classes found: Found in: org.apache.hadoop:hadoop-client-api:jar:3.0.0-alpha2-SNAPSHOT:compile org.apache.hadoop:hadoop-client-minicluster:jar:3.0.0-alpha2-SNAPSHOT:compile Duplicate classes: org/apache/hadoop/ipc/protobuf/TestRpcServiceProtos$TestProtobufRpc2Proto.class org/apache/hadoop/ipc/protobuf/TestProtos$EmptyResponseProto$Builder.class org/apache/hadoop/ipc/protobuf/TestProtos$1.class org/apache/hadoop/ipc/protobuf/TestProtos$SleepResponseProto$1.class org/apache/hadoop/ipc/protobuf/TestRpcServiceProtos$OldProtobufRpcProto$Interface.class org/apache/hadoop/ipc/protobuf/TestRpcServiceProtos$NewProtobufRpcProto$BlockingInterface.class org/apache/hadoop/ipc/protobuf/TestProtos$AuthMethodResponseProto$1.class ... {noformat} trunk is a fast moving target, so if you give me your git hash, I can review against that. 
> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: build
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to talk with a Hadoop cluster without seeing any of the implementation dependencies.
> see proposal on parent for details.
[jira] [Commented] (HADOOP-8500) Javadoc jars contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623690#comment-15623690 ] Xiao Chen commented on HADOOP-8500:
---
Thanks Andrew, +1 from me.
> Javadoc jars contain entire target directory
> Key: HADOOP-8500
> URL: https://issues.apache.org/jira/browse/HADOOP-8500
[jira] [Commented] (HADOOP-8500) Javadoc jars contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623681#comment-15623681 ] Hadoop QA commented on HADOOP-8500: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 
49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s{color} | {color:green} hadoop-dist in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-8500 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836238/HADOOP-8500.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux c40efd284c9b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 90dd3a8 | | Default Java | 1.8.0_101 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10937/testReport/ | | modules | C: hadoop-project hadoop-dist U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10937/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Javadoc jars contain entire target directory > > > Key: HADOOP-8500 > URL: https://issues.apache.org/jira/browse/HADOOP-8500 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.0.0-alpha > Environment: N/A >Reporter: EJ Ciramella >Assignee: Andrew Wang >Priority: Minor > Attachments: HADOOP-8500.001.patch,
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623648#comment-15623648 ] Sangjin Lee commented on HADOOP-13410:
--
Opened HADOOP-13776.
> RunJar adds the content of the jar twice to the classpath
> -
>
> Key: HADOOP-13410
> URL: https://issues.apache.org/jira/browse/HADOOP-13410
> Project: Hadoop Common
> Issue Type: Bug
> Components: util
> Reporter: Sangjin Lee
> Assignee: Yuanbo Liu
> Fix For: 3.0.0-alpha1
> Attachments: HADOOP-13410.001.patch, HADOOP-13410.002.patch
>
> Today when you run a "hadoop jar" command, the jar is unzipped to a temporary location and gets added to the classloader. However, the original jar itself is still added to the classpath.
> {code}
> List<URL> classPath = new ArrayList<>();
> classPath.add(new File(workDir + "/").toURI().toURL());
> classPath.add(file.toURI().toURL());
> classPath.add(new File(workDir, "classes/").toURI().toURL());
> File[] libs = new File(workDir, "lib").listFiles();
> if (libs != null) {
>   for (File lib : libs) {
>     classPath.add(lib.toURI().toURL());
>   }
> }
> {code}
> As a result, the contents of the jar are present in the classpath *twice* and are completely redundant. Although this does not necessarily cause correctness issues, some stricter code written to require a single presence of files may fail.
> I cannot think of a good reason why the jar should be added to the classpath if the unjarred content was added to it. I think we should remove the jar from the classpath.
[jira] [Updated] (HADOOP-13234) Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort
[ https://issues.apache.org/jira/browse/HADOOP-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-13234:
--
Resolution: Won't Fix
Status: Resolved (was: Patch Available)
> Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort
> -
>
> Key: HADOOP-13234
> URL: https://issues.apache.org/jira/browse/HADOOP-13234
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13234-002.patch, HADOOP-13234.patch
>
> As per [~iwasakims] comment from [here|https://issues.apache.org/jira/browse/HDFS-10367?focusedCommentId=15275604=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15275604] we can get available random port by {{new ServerSocket(0).getLocalPort()}} and it's more portable.
[jira] [Commented] (HADOOP-13234) Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort
[ https://issues.apache.org/jira/browse/HADOOP-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623647#comment-15623647 ] Masatake Iwasaki commented on HADOOP-13234:
---
bq. So we can close this issue..? and to address the following which is mentioned by Xiaoyu Yao, need to increase the retry..?
Yeah. I'm closing this as won't fix. Feel free to reopen this if you need.
> Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort
> Key: HADOOP-13234
> URL: https://issues.apache.org/jira/browse/HADOOP-13234
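For reference, the portable port-picking idiom this issue discusses, {{new ServerSocket(0).getLocalPort()}}, looks like the following. This is a minimal sketch, not the actual ServerSocketUtil code.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortSketch {
    /** Ask the OS for any free ephemeral port and return its number. */
    public static int freePort() {
        // Port 0 tells the OS to pick an unused port; read it back, then close.
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new IllegalStateException("could not bind an ephemeral port", e);
        }
    }
}
```

The caveat, which is why retries can still be needed (the point Xiaoyu Yao raised): the port is released when the socket closes, so another process may grab it before the caller rebinds it.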
[jira] [Updated] (HADOOP-13776) remove redundant classpath entries in RunJar
[ https://issues.apache.org/jira/browse/HADOOP-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HADOOP-13776:
-
Status: Patch Available (was: Open)
> remove redundant classpath entries in RunJar
>
> Key: HADOOP-13776
> URL: https://issues.apache.org/jira/browse/HADOOP-13776
> Project: Hadoop Common
> Issue Type: Bug
> Components: util
> Reporter: Sangjin Lee
> Assignee: Sangjin Lee
> Attachments: HADOOP-13776.01.patch
>
> Today when you run a "hadoop jar" command, the content of the jar gets added to the classpath twice, once in the jar form, and again in an unpacked form. We should include the content of the jar in the classpath only once. We should keep the jar in the classpath (to support {{setJarByClass}} and other useful use cases) but remove the root of the unpacked directory.
[jira] [Updated] (HADOOP-13776) remove redundant classpath entries in RunJar
[ https://issues.apache.org/jira/browse/HADOOP-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HADOOP-13776:
-
Attachment: HADOOP-13776.01.patch
> remove redundant classpath entries in RunJar
> Key: HADOOP-13776
> URL: https://issues.apache.org/jira/browse/HADOOP-13776
[jira] [Created] (HADOOP-13776) remove redundant classpath entries in RunJar
Sangjin Lee created HADOOP-13776:
Summary: remove redundant classpath entries in RunJar
Key: HADOOP-13776
URL: https://issues.apache.org/jira/browse/HADOOP-13776
Project: Hadoop Common
Issue Type: Bug
Components: util
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Today when you run a "hadoop jar" command, the content of the jar gets added to the classpath twice, once in the jar form, and again in an unpacked form. We should include the content of the jar in the classpath only once. We should keep the jar in the classpath (to support {{setJarByClass}} and other useful use cases) but remove the root of the unpacked directory.
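A minimal sketch of the classpath construction this issue proposes: keep the jar itself (so {{setJarByClass}} keeps working) but drop the unpacked workDir root, so each class appears only once. This is illustrative, not the committed patch; the class and method names are invented for the example.

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class RunJarClasspathSketch {
    /** Build the classpath for a "hadoop jar"-style launcher. */
    public static List<URL> buildClassPath(File jar, File workDir) {
        List<URL> classPath = new ArrayList<>();
        try {
            classPath.add(jar.toURI().toURL());                            // the jar itself
            classPath.add(new File(workDir, "classes/").toURI().toURL()); // unpacked classes/ subdir
            File[] libs = new File(workDir, "lib").listFiles();
            if (libs != null) {
                for (File lib : libs) {
                    classPath.add(lib.toURI().toURL());                    // bundled lib jars
                }
            }
            // Note: unlike the current RunJar, there is no entry for the
            // workDir root, which is what duplicated the jar's contents.
        } catch (MalformedURLException e) {
            throw new IllegalStateException(e);
        }
        return classPath;
    }
}
```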
[jira] [Updated] (HADOOP-8500) Javadoc jars contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-8500:
Attachment: HADOOP-8500.003.patch
My bad, should have tested install too. 003 disables the install plugin too, since we don't need to install these dummy artifacts.
> Javadoc jars contain entire target directory
> Key: HADOOP-8500
> URL: https://issues.apache.org/jira/browse/HADOOP-8500
[jira] [Commented] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623585#comment-15623585 ] Hadoop QA commented on HADOOP-10300: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 26s{color} | {color:red} root in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 19s{color} | {color:red} root in branch-2.7.0 failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 21s{color} | {color:red} root in branch-2.7.0 failed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} branch-2.7.0 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 9s{color} | {color:red} hadoop-common in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 9s{color} | {color:red} hadoop-common in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 7s{color} | {color:red} hadoop-common in branch-2.7.0 failed with JDK v1.8.0_111. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in branch-2.7.0 failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s{color} | {color:red} root in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 18s{color} | {color:red} root in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 226 unchanged - 5 fixed = 227 total (was 231) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 10s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 1685 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 7s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s{color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_111. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 6s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 7m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:date2016-10-31 | | JIRA Issue | HADOOP-10300 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836217/HADOOP-10300-branch-2.7.0.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux db2d48d9db78 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3
[jira] [Updated] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-10300: --- Attachment: (was: HADOOP-10300-branch-2.7.0.patch) > Allowed deferred sending of call responses > -- > > Key: HADOOP-10300 > URL: https://issues.apache.org/jira/browse/HADOOP-10300 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Labels: BB2015-05-TBR > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-10300-branch-2.7.patch, HADOOP-10300.patch, > HADOOP-10300.patch, HADOOP-10300.patch > > > RPC handlers currently do not return until the RPC call completes and > response is sent, or a partially sent response has been queued for the > responder. It would be useful for a proxy method to notify the handler not > to send the call's response yet. > A potential use case is that a namespace handler in the NN might want to return > before the edit log is synced so it can service more requests and allow > increased batching of edits per sync. Background syncing could later trigger > the sending of the call response to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-10300: --- Attachment: HADOOP-10300-branch-2.7.patch > Allowed deferred sending of call responses > -- > > Key: HADOOP-10300 > URL: https://issues.apache.org/jira/browse/HADOOP-10300 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Labels: BB2015-05-TBR > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-10300-branch-2.7.patch, HADOOP-10300.patch, > HADOOP-10300.patch, HADOOP-10300.patch > > > RPC handlers currently do not return until the RPC call completes and > response is sent, or a partially sent response has been queued for the > responder. It would be useful for a proxy method to notify the handler not > to send the call's response yet. > A potential use case is that a namespace handler in the NN might want to return > before the edit log is synced so it can service more requests and allow > increased batching of edits per sync. Background syncing could later trigger > the sending of the call response to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
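The deferral idea described above (a handler returns before the response is sent, and a background sync later triggers sending) can be sketched in plain Java. This is an illustrative pattern only, with hypothetical class and method names; it does not reflect the actual Hadoop IPC API added by this patch.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the deferred-response pattern, not the patch itself.
class DeferredResponseSketch {
    // Responses that the handler deferred instead of sending immediately.
    static final BlockingQueue<CompletableFuture<String>> deferred =
        new LinkedBlockingQueue<>();

    // The handler returns right away without sending a response, so it is
    // free to service the next call (analogous to the NN returning before
    // the edit log is synced).
    static CompletableFuture<String> handle(String request) {
        CompletableFuture<String> response = new CompletableFuture<>();
        deferred.add(response);
        return response;
    }

    // A background task (e.g. an edit-log sync) later triggers sending
    // all deferred responses in one batch.
    static void syncAndRespond() {
        CompletableFuture<String> f;
        while ((f = deferred.poll()) != null) {
            f.complete("ok"); // analogous to queuing the call response
        }
    }
}
```

The batching benefit falls out naturally: many calls can be deferred, then one sync completes them all.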
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623567#comment-15623567 ] Andrew Wang commented on HADOOP-13410: -- Thanks for the clarification Sangjin. Yea, I think we need a new JIRA. Even though we missed setting the fix version for the 3.0.0-alpha1 changelog, we can still set it now so it's correct in JIRA. If you want, we could reuse the revert JIRA (HADOOP-13620) for the second patch since alpha2 hasn't gone out yet, but it's probably more clear to just use a new JIRA. > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13410.001.patch, HADOOP-13410.002.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List<URL> classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
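The direction proposed in the issue above (drop the original jar from the classpath, keeping only the unpacked contents) can be sketched as follows. This is an illustration of the idea, not the committed patch; the helper class and method names are hypothetical.

```java
import java.io.File;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed fix: build the classpath from the unpacked
// contents only, leaving the original jar out. Names are hypothetical.
class RunJarClasspathSketch {
    static List<URL> buildClassPath(File workDir, File jarFile) throws Exception {
        List<URL> classPath = new ArrayList<>();
        classPath.add(new File(workDir + "/").toURI().toURL());
        // classPath.add(jarFile.toURI().toURL());  // dropped: the jar's
        // contents are already reachable via workDir, so adding the jar
        // itself would put every entry on the classpath twice
        classPath.add(new File(workDir, "classes/").toURI().toURL());
        File[] libs = new File(workDir, "lib").listFiles();
        if (libs != null) {
            for (File lib : libs) {
                classPath.add(lib.toURI().toURL());
            }
        }
        return classPath;
    }
}
```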
[jira] [Commented] (HADOOP-8500) Javadoc jars contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623562#comment-15623562 ] Xiao Chen commented on HADOOP-8500: --- Thanks Andrew for the explanation! I haven't heard of any use case of the hadoop-dist javadoc jar either. So +1 to remove the dummy jars. It seems {{mvn clean install -DskipTests}} is broken though. :( > Javadoc jars contain entire target directory > > > Key: HADOOP-8500 > URL: https://issues.apache.org/jira/browse/HADOOP-8500 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.0.0-alpha > Environment: N/A >Reporter: EJ Ciramella >Assignee: Andrew Wang >Priority: Minor > Attachments: HADOOP-8500.001.patch, HADOOP-8500.002.patch, > HADOOP-8500.patch, site-redo.tar > > Original Estimate: 24h > Remaining Estimate: 24h > > The javadoc jars contain the contents of the target directory - which > includes classes and all sorts of binary files that it shouldn't. > Sometimes the resulting javadoc jar is 10X bigger than it should be. > The fix is to reconfigure maven to use "api" as its destDir for javadoc > generation. > I have a patch/diff incoming. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-8500) Javadoc jars contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623550#comment-15623550 ] Hadoop QA commented on HADOOP-8500: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} hadoop-dist in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s{color} | {color:green} hadoop-dist in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-8500 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836229/HADOOP-8500.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux dd6a87b8944e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a1761a8 | | Default Java | 1.8.0_101 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/10933/artifact/patchprocess/patch-mvninstall-hadoop-dist.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10933/testReport/ | | modules | C: hadoop-project hadoop-dist U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10933/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Javadoc jars contain entire target directory > > > Key: HADOOP-8500 > URL: https://issues.apache.org/jira/browse/HADOOP-8500 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.0.0-alpha > Environment: N/A >
[jira] [Commented] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623496#comment-15623496 ] Hadoop QA commented on HADOOP-10300: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 50s{color} | {color:red} root in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 27s{color} | {color:red} root in branch-2.7.0 failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 22s{color} | {color:red} root in branch-2.7.0 failed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} branch-2.7.0 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 11s{color} | {color:red} hadoop-common in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 10s{color} | {color:red} hadoop-common in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in branch-2.7.0 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in branch-2.7.0 failed with JDK v1.8.0_111. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in branch-2.7.0 failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s{color} | {color:red} root in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 18s{color} | {color:red} root in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 226 unchanged - 5 fixed = 227 total (was 231) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 9s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 9s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 1334 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 35s{color} | {color:red} The patch 59 line(s) with tabs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s{color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 8s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 47m 53s{color} | {color:red} The patch generated 91 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:date2016-10-31 | | JIRA Issue | HADOOP-10300 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836217/HADOOP-10300-branch-2.7.0.patch | | Optional Tests | asflicense compile javac javadoc mvninstall
[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider
[ https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623484#comment-15623484 ] Larry McCay commented on HADOOP-12554: -- [~ste...@apache.org] - is the following assertion from @ramtin sufficient to establish that fact: bq. Provided path supports Identity API v3 (HADOOP-12525 is prerequisite) Steve Loughran, the patch was tested against IBM Bluemix Object Storage, which uses SoftLayer in the Dallas region. It was tested against a previous patch, and unit tests still need to be added, but I am just trying to get the minimum requirements in place for getting this committed. As you mentioned before, we also need docs for it. > Swift client to read credentials from a credential provider > --- > > Key: HADOOP-12554 > URL: https://issues.apache.org/jira/browse/HADOOP-12554 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/swift >Affects Versions: 2.7.1 >Reporter: Steve Loughran >Assignee: ramtin >Priority: Minor > Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch > > > As HADOOP-12548 is going to do for s3, Swift should be reading credentials, > particularly passwords, from a credential provider. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13775) KMS Client does not encode key names in the URL path correctly
[ https://issues.apache.org/jira/browse/HADOOP-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13775: --- Status: Patch Available (was: Open) > KMS Client does not encode key names in the URL path correctly > -- > > Key: HADOOP-13775 > URL: https://issues.apache.org/jira/browse/HADOOP-13775 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HADOOP-13775.01.patch > > > HADOOP-12962 fixed the KMS server to encode special characters correctly when > they're part of the > [URI|https://docs.oracle.com/javase/7/docs/api/java/net/URI.html] query. > It turns out they can also cause trouble when being the URI path. This time > it's the client-side that's broken. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-8500) Javadoc jars contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-8500: Attachment: HADOOP-8500.002.patch Patch attached; I disable the artifacts attached by the "dist" profile by binding the executions to a non-existent phase. > Javadoc jars contain entire target directory > > > Key: HADOOP-8500 > URL: https://issues.apache.org/jira/browse/HADOOP-8500 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.0.0-alpha > Environment: N/A >Reporter: EJ Ciramella >Assignee: Andrew Wang >Priority: Minor > Attachments: HADOOP-8500.001.patch, HADOOP-8500.002.patch, > HADOOP-8500.patch, site-redo.tar > > Original Estimate: 24h > Remaining Estimate: 24h > > The javadoc jars contain the contents of the target directory - which > includes classes and all sorts of binary files that it shouldn't. > Sometimes the resulting javadoc jar is 10X bigger than it should be. > The fix is to reconfigure maven to use "api" as its destDir for javadoc > generation. > I have a patch/diff incoming. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
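The "bind to a non-existent phase" trick Andrew describes is a standard Maven idiom for suppressing an inherited plugin execution. A minimal POM fragment sketching the idea follows; the plugin and execution ids here are assumptions for illustration, not necessarily the ones in the actual patch.

```xml
<!-- Sketch (assumed plugin/execution ids): an inherited execution is
     skipped by rebinding it to the non-existent "none" phase, so Maven
     never runs it during the lifecycle. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <executions>
    <execution>
      <id>module-javadocs</id>
      <phase>none</phase> <!-- execution never runs -->
    </execution>
  </executions>
</plugin>
```

The execution id must match the inherited execution exactly, otherwise Maven treats it as a new execution instead of overriding the old one.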
[jira] [Updated] (HADOOP-13775) KMS Client does not encode key names in the URL path correctly
[ https://issues.apache.org/jira/browse/HADOOP-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13775: --- Attachment: HADOOP-13775.01.patch The current use of Java's [URLEncoder|https://docs.oracle.com/javase/7/docs/api/java/net/URLEncoder.html] is incorrect, as that is for form encoding. In ops like [getKeyMetadata|http://hadoop.apache.org/docs/r3.0.0-alpha1/hadoop-kms/index.html#Get_Key_Metadata], the key name is part of the URL path, so it should be URL-encoded. For example, a key name containing a space ' ' should be encoded as '%20' (URL encoded), not '+' (form encoded). Updated unit tests too. > KMS Client does not encode key names in the URL path correctly > -- > > Key: HADOOP-13775 > URL: https://issues.apache.org/jira/browse/HADOOP-13775 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HADOOP-13775.01.patch > > > HADOOP-12962 fixed the KMS server to encode special characters correctly when > they're part of the > [URI|https://docs.oracle.com/javase/7/docs/api/java/net/URI.html] query. > It turns out they can also cause trouble when being the URI path. This time > it's the client-side that's broken. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
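The form-encoding vs path-encoding distinction above can be demonstrated with the JDK alone. This is a standalone illustration, not the patch's actual code; the `/kms/v1/key/` prefix is taken from the KMS REST docs and the class name is hypothetical.

```java
import java.net.URI;
import java.net.URLEncoder;

// Demonstrates why URLEncoder is the wrong tool for URL *paths*: it performs
// application/x-www-form-urlencoded encoding, which turns a space into '+',
// whereas a URI path needs percent-encoding ('%20').
class KeyNameEncodingDemo {
    static String formEncoded(String keyName) throws Exception {
        return URLEncoder.encode(keyName, "UTF-8");
    }

    static String pathEncoded(String keyName) throws Exception {
        // The multi-argument URI constructor percent-encodes characters
        // that are illegal in the path component.
        URI uri = new URI(null, null, "/kms/v1/key/" + keyName, null);
        return uri.getRawPath();
    }
}
```

For a key named "key name", the form encoder yields "key+name" while the path encoder yields "/kms/v1/key/key%20name".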
[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider
[ https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623466#comment-15623466 ] Steve Loughran commented on HADOOP-12554: - well, if there is a new auth mech, I'll need you to assert that you have at least set this up as your login and run all the swift tests with it. > Swift client to read credentials from a credential provider > --- > > Key: HADOOP-12554 > URL: https://issues.apache.org/jira/browse/HADOOP-12554 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/swift >Affects Versions: 2.7.1 >Reporter: Steve Loughran >Assignee: ramtin >Priority: Minor > Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch > > > As HADOOP-12548 is going to do for s3, Swift should be reading credentials, > particularly passwords, from a credential provider. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes
[ https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623456#comment-15623456 ] Hudson commented on HADOOP-13680: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10735 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10735/]) HADOOP-13680. fs.s3a.readahead.range to use getLongBytes. Contributed by (stevel: rev a1761a841e95ef7d2296ac3e40b3a26d97787eab) * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml > fs.s3a.readahead.range to use getLongBytes > -- > > Key: HADOOP-13680 > URL: https://issues.apache.org/jira/browse/HADOOP-13680 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Abhishek Modi > Fix For: 2.8.0 > > Attachments: HADOOP-13680-branch-2-004.patch, HADOOP-13680.001.patch > > > The {{fs.s3a.readahead.range}} value is measured in bytes, but can be > hundreds of KB. Easier to use getLongBytes and set to things like "300k" > This will be backwards compatible with the existing settings if anyone is > using them, because the no-prefix default will still be bytes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
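The behavior this change enables, a size value like "300k" parsed into bytes with binary multipliers, can be sketched as follows. This is a simplified illustration of the suffix semantics only; Hadoop's real `Configuration.getLongBytes` handles more cases, and the class here is hypothetical.

```java
// Simplified sketch of size-suffix parsing (binary multipliers:
// k = 2^10, m = 2^20, g = 2^30, t = 2^40). A bare number stays bytes,
// which is what keeps existing fs.s3a.readahead.range settings compatible.
class SizeSuffix {
    static long parseBytes(String value) {
        String v = value.trim().toLowerCase();
        long multiplier = 1L;
        switch (v.charAt(v.length() - 1)) {
            case 'k': multiplier = 1L << 10; break;
            case 'm': multiplier = 1L << 20; break;
            case 'g': multiplier = 1L << 30; break;
            case 't': multiplier = 1L << 40; break;
        }
        if (multiplier > 1L) {
            v = v.substring(0, v.length() - 1); // strip the suffix
        }
        return Long.parseLong(v) * multiplier;
    }
}
```

Under these semantics "300k" becomes 307200 bytes, while "65536" with no suffix remains 65536 bytes.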
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623442#comment-15623442 ] Sangjin Lee commented on HADOOP-13410: -- Sorry, it is a bit confusing. The original commit for this JIRA did make alpha 1 (although the fix version was not marked as such because I mistakenly thought it was committed too late for alpha 1). Then [~bibinchundatt] found that it breaks MR job submission, and it was reverted in HADOOP-13620, which is in alpha 2 (the issue was discovered after alpha 1 was released). I then reopened this JIRA to fix the original issue correctly. That's what the 2nd patch is about. [~andrew.wang], I take it that your suggestion is not to reuse this JIRA but to open a new one to fix this issue correctly? > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13410.001.patch, HADOOP-13410.002.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List<URL> classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. 
> I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13775) KMS Client does not encode key names in the URL path correctly
[ https://issues.apache.org/jira/browse/HADOOP-13775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13775: --- Labels: supportability (was: ) > KMS Client does not encode key names in the URL path correctly > -- > > Key: HADOOP-13775 > URL: https://issues.apache.org/jira/browse/HADOOP-13775 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > > HADOOP-12962 fixed the KMS server to encode special characters correctly when > they're part of the > [URI|https://docs.oracle.com/javase/7/docs/api/java/net/URI.html] query. > It turns out they can also cause trouble when being the URI path. This time > it's the client-side that's broken. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13775) KMS Client does not encode key names in the URL path correctly
Xiao Chen created HADOOP-13775: -- Summary: KMS Client does not encode key names in the URL path correctly Key: HADOOP-13775 URL: https://issues.apache.org/jira/browse/HADOOP-13775 Project: Hadoop Common Issue Type: Bug Components: kms Affects Versions: 2.6.0 Reporter: Xiao Chen Assignee: Xiao Chen HADOOP-12962 fixed the KMS server to encode special characters correctly when they're part of the [URI|https://docs.oracle.com/javase/7/docs/api/java/net/URI.html] query. It turns out they can also cause trouble when being the URI path. This time it's the client-side that's broken. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13774) Rest Loaded App fails
[ https://issues.apache.org/jira/browse/HADOOP-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Omar Bouras updated HADOOP-13774: - Attachment: WordCount.java app.json > Rest Loaded App fails > - > > Key: HADOOP-13774 > URL: https://issues.apache.org/jira/browse/HADOOP-13774 > Project: Hadoop Common > Issue Type: Bug > Environment: Hadoop Map Reduce REST >Reporter: Omar Bouras >Priority: Minor > Fix For: 2.7.3 > > Attachments: WordCount.java, app.json > > > Hello, > I am launching an app within a MR REST. This app executes well within Hadoop > normal invocation. However, when I use the rest > {code} > curl -i -X POST -H 'Accept: application/json' -H 'Content-Type: > application/json' http://localhost:8088/ws/v1/cluster/apps?user.name=exo -d > @app.json > {code} > 6/10/31 21:42:13 INFO client.RMProxy: Connecting to ResourceManager at > /0.0.0.0:8032 > 16/10/31 21:42:14 INFO input.FileInputFormat: Total input paths to process : 4 > 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: number of splits:4 > 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: > job_1477946173138_0003 > 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Kind: YARN_AM_RM_TOKEN, > Service: , Ident: (appAttemptId { application_id { id: 3 cluster_timestamp: > 1477946173138 } attemptId: 1 } keyId: -678745738) > 16/10/31 21:42:15 INFO impl.YarnClientImpl: Submitted application > application_1477946173138_0003 > 16/10/31 21:42:15 INFO mapreduce.Job: The url to track the job: > http://MEA-029-L:8088/proxy/application_1477946173138_0003/ > 16/10/31 21:42:15 INFO mapreduce.Job: Running job: job_1477946173138_0003 > 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 running in > uber mode : false > 16/10/31 21:52:53 INFO mapreduce.Job: map 0% reduce 0% > 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 failed with > state FAILED due to: Application application_1477946173138_0003 failed 1 > times due to ApplicationMaster for attempt > 
appattempt_1477946173138_0003_01 timed out. Failing the application. > 16/10/31 21:52:53 INFO mapreduce.Job: Counters: 0 > {code} > The main job always ends up failing with a timeout.
[jira] [Created] (HADOOP-13774) Rest Loaded App fails
Omar Bouras created HADOOP-13774: Summary: Rest Loaded App fails Key: HADOOP-13774 URL: https://issues.apache.org/jira/browse/HADOOP-13774 Project: Hadoop Common Issue Type: Bug Environment: Hadoop Map Reduce REST Reporter: Omar Bouras Priority: Minor Fix For: 2.7.3 Hello, I am launching an app through the MapReduce REST API. This app executes well under normal Hadoop invocation. However, when I use the REST call {code} curl -i -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' http://localhost:8088/ws/v1/cluster/apps?user.name=exo -d @app.json {code} {code} 16/10/31 21:42:13 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 16/10/31 21:42:14 INFO input.FileInputFormat: Total input paths to process : 4 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: number of splits:4 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1477946173138_0003 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 3 cluster_timestamp: 1477946173138 } attemptId: 1 } keyId: -678745738) 16/10/31 21:42:15 INFO impl.YarnClientImpl: Submitted application application_1477946173138_0003 16/10/31 21:42:15 INFO mapreduce.Job: The url to track the job: http://MEA-029-L:8088/proxy/application_1477946173138_0003/ 16/10/31 21:42:15 INFO mapreduce.Job: Running job: job_1477946173138_0003 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 running in uber mode : false 16/10/31 21:52:53 INFO mapreduce.Job: map 0% reduce 0% 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 failed with state FAILED due to: Application application_1477946173138_0003 failed 1 times due to ApplicationMaster for attempt appattempt_1477946173138_0003_01 timed out. Failing the application. 16/10/31 21:52:53 INFO mapreduce.Job: Counters: 0 {code} The main job always ends up failing with a timeout.
[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider
[ https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623415#comment-15623415 ] Jonathan Maron commented on HADOOP-12554: - Probably a good idea for this to move forward given that most Swift-enabled object stores do require some form of authentication. > Swift client to read credentials from a credential provider > --- > > Key: HADOOP-12554 > URL: https://issues.apache.org/jira/browse/HADOOP-12554 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/swift >Affects Versions: 2.7.1 >Reporter: Steve Loughran >Assignee: ramtin >Priority: Minor > Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch > > > As HADOOP-12548 is going to do for s3, Swift should be reading credentials, > particularly passwords, from a credential provider.
[jira] [Updated] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes
[ https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13680: Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1 committed. Thanks for this! > fs.s3a.readahead.range to use getLongBytes > -- > > Key: HADOOP-13680 > URL: https://issues.apache.org/jira/browse/HADOOP-13680 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Abhishek Modi > Fix For: 2.8.0 > > Attachments: HADOOP-13680-branch-2-004.patch, HADOOP-13680.001.patch > > > The {{fs.s3a.readahead.range}} value is measured in bytes, but can be > hundreds of KB. Easier to use getLongBytes and set to things like "300k" > This will be backwards compatible with the existing settings if anyone is > using them, because the no-prefix default will still be bytes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
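The suffix handling that getLongBytes enables can be sketched roughly as follows (a simplified stand-in, not Hadoop's Configuration implementation, which supports more suffixes): a bare number is taken as bytes, while k/m/g multiply by binary powers of 1024, so "300k" and "307200" configure the same readahead range.

```java
public class LongBytes {
  // Simplified sketch of getLongBytes-style parsing: "300k" -> 307200.
  static long parseLongBytes(String value) {
    String v = value.trim().toLowerCase();
    long multiplier;
    char last = v.charAt(v.length() - 1);
    switch (last) {
      case 'k': multiplier = 1024L; break;
      case 'm': multiplier = 1024L * 1024; break;
      case 'g': multiplier = 1024L * 1024 * 1024; break;
      default:  return Long.parseLong(v);  // no suffix: plain bytes
    }
    return Long.parseLong(v.substring(0, v.length() - 1)) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseLongBytes("300k"));   // 307200
    System.out.println(parseLongBytes("65536"));  // 65536
  }
}
```

This is also why the change is backwards compatible: a value with no suffix parses exactly as it did before, as a plain byte count.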
[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider
[ https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623384#comment-15623384 ] Larry McCay commented on HADOOP-12554: -- [~ste...@apache.org] and [~ramtinb] - progress has stalled here. [~ste...@apache.org] - your previous comment seems to indicate that we need functional tests for this contribution. Can we get away with just an extension of the Swift configuration testing and extrapolate from that to it working? Also, when you say credential file do you mean a credential provider store (jceks) or the Hadoop credentials object? Either of those would work but the latter would require a different provider configuration. > Swift client to read credentials from a credential provider > --- > > Key: HADOOP-12554 > URL: https://issues.apache.org/jira/browse/HADOOP-12554 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/swift >Affects Versions: 2.7.1 >Reporter: Steve Loughran >Assignee: ramtin >Priority: Minor > Attachments: HADOOP-12554.001.patch, HADOOP-12554.002.patch > > > As HADOOP-12548 is going to do for s3, Swift should be reading credentials, > particularly passwords, from a credential provider.
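The provider-first lookup pattern under discussion can be sketched as follows. The interface and key name below are simplified stand-ins, not Hadoop's actual Configuration/CredentialProvider API: a configured provider (e.g. a jceks store) is consulted first, with the clear-text config value as a fallback.

```java
import java.util.HashMap;
import java.util.Map;

public class CredentialLookup {
  // Stand-in for a credential provider backend (e.g. a jceks keystore).
  interface CredentialProvider {
    char[] getPassword(String alias);
  }

  // Provider wins when it has the alias; otherwise fall back to the
  // clear-text configuration value.
  static char[] getPassword(CredentialProvider provider,
                            Map<String, String> conf, String key) {
    if (provider != null) {
      char[] fromProvider = provider.getPassword(key);
      if (fromProvider != null) {
        return fromProvider;
      }
    }
    String plain = conf.get(key);
    return plain == null ? null : plain.toCharArray();
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("fs.swift.service.auth.password", "clear-text");
    CredentialProvider jceks = alias -> "from-jceks".toCharArray();
    // The provider value shadows the clear-text config entry.
    System.out.println(new String(
        getPassword(jceks, conf, "fs.swift.service.auth.password")));
  }
}
```

Returning char[] rather than String mirrors the usual credential-API convention of letting callers zero the buffer after use.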
[jira] [Commented] (HADOOP-8500) Javadoc jars contain entire target directory
[ https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623357#comment-15623357 ] Andrew Wang commented on HADOOP-8500: - So one thing to note is that the various hadoop-dist JARs are dummies; they don't have meaningful contents: {noformat} -> % jar -tf ./hadoop-dist/target/hadoop-dist-3.0.0-alpha2-SNAPSHOT.jar META-INF/ META-INF/MANIFEST.MF META-INF/LICENSE.txt META-INF/NOTICE.txt META-INF/maven/ META-INF/maven/org.apache.hadoop/ META-INF/maven/org.apache.hadoop/hadoop-dist/ META-INF/maven/org.apache.hadoop/hadoop-dist/pom.xml META-INF/maven/org.apache.hadoop/hadoop-dist/pom.properties {noformat} So, I'm not sure what exactly people expect to find in this javadoc jar. Downstreams typically reference our hadoop-client artifact, not hadoop-dist, and the per-module javadocs are still being generated. If anything, it might be a mistake that we're generating JARs in hadoop-dist at all. I believe the purpose of hadoop-dist is to call dist-layout-stitching to assemble the tarball layout from the other modules. However, since it inherits maven-jar-plugin from a parent pom, the default-jar execution is generating these dummy jars. I'll play with a patch that cleans this up too. > Javadoc jars contain entire target directory > > > Key: HADOOP-8500 > URL: https://issues.apache.org/jira/browse/HADOOP-8500 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.0.0-alpha > Environment: N/A >Reporter: EJ Ciramella >Assignee: Andrew Wang >Priority: Minor > Attachments: HADOOP-8500.001.patch, HADOOP-8500.patch, site-redo.tar > > Original Estimate: 24h > Remaining Estimate: 24h > > The javadoc jars contain the contents of the target directory - which > includes classes and all sorts of binary files that it shouldn't. > Sometimes the resulting javadoc jar is 10X bigger than it should be. > The fix is to reconfigure maven to use "api" as its destDir for javadoc > generation.
> I have a patch/diff incoming.
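One conventional way to suppress an inherited maven-jar-plugin default-jar execution is to rebind it to no phase. This is a sketch of the general Maven idiom, not the actual cleanup patch, which may take a different approach:

```xml
<!-- Illustrative only: unbind the inherited default-jar execution so the
     module stops producing a dummy jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <id>default-jar</id>
      <phase>none</phase>
    </execution>
  </executions>
</plugin>
```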
[jira] [Commented] (HADOOP-8065) distcp should have an option to compress data while copying.
[ https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623338#comment-15623338 ] Yongjun Zhang commented on HADOOP-8065: --- Hi [~snayakm], Wondering if you are available to continue on this issue? Thanks much. --Yongjun > distcp should have an option to compress data while copying. > > > Key: HADOOP-8065 > URL: https://issues.apache.org/jira/browse/HADOOP-8065 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 0.20.2 >Reporter: Suresh Antony >Assignee: Suraj Nayak >Priority: Minor > Labels: distcp > Fix For: 0.20.2 > > Attachments: HADOOP-8065-trunk_2015-11-03.patch, > HADOOP-8065-trunk_2015-11-04.patch, HADOOP-8065-trunk_2016-04-29-4.patch, > patch.distcp.2012-02-10 > > > We would like to compress the data while transferring it from our source system to > the target system. One way to do this is to write a map/reduce job to compress > it after/before being transferred. This looks inefficient. > Since distcp is already reading and writing the data, it would be better if it could > compress it while doing so. > The flip side of this is that the distcp -update option cannot check file size > before copying data. It can only check for the existence of the file. > So I propose that if the -compress option is given, file size is not checked. > Also, when we copy a file, an appropriate extension needs to be added > depending on the compression type.
[jira] [Updated] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-10300: --- Attachment: HADOOP-10300-branch-2.7.0.patch > Allowed deferred sending of call responses > -- > > Key: HADOOP-10300 > URL: https://issues.apache.org/jira/browse/HADOOP-10300 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Labels: BB2015-05-TBR > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-10300-branch-2.7.0.patch, HADOOP-10300.patch, > HADOOP-10300.patch, HADOOP-10300.patch > > > RPC handlers currently do not return until the RPC call completes and > response is sent, or a partially sent response has been queued for the > responder. It would be useful for a proxy method to notify the handler > not to send the call's response yet. > A potential use case: a namespace handler in the NN might want to return > before the edit log is synced so it can service more requests and allow > increased batching of edits per sync. Background syncing could later trigger > the sending of the call response to the client.
[jira] [Reopened] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang reopened HADOOP-10300: I think this'd be a good addition to branch-2.7; all other subtasks under the umbrella JIRA are actually in 2.3. Attaching a branch-2.7 patch to trigger Jenkins. [~daryn] [~kihwal] LMK if you have any concerns about the backport. > Allowed deferred sending of call responses > -- > > Key: HADOOP-10300 > URL: https://issues.apache.org/jira/browse/HADOOP-10300 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Labels: BB2015-05-TBR > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-10300-branch-2.7.0.patch, HADOOP-10300.patch, > HADOOP-10300.patch, HADOOP-10300.patch > > > RPC handlers currently do not return until the RPC call completes and > response is sent, or a partially sent response has been queued for the > responder. It would be useful for a proxy method to notify the handler > not to send the call's response yet. > A potential use case: a namespace handler in the NN might want to return > before the edit log is synced so it can service more requests and allow > increased batching of edits per sync. Background syncing could later trigger > the sending of the call response to the client.
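The deferred-response idea described in this issue can be sketched with plain java.util.concurrent types. This shows only the pattern, not Hadoop's ipc.Server API: the handler parks the response and returns immediately, and a background "sync" task sends the queued responses in a batch.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DeferredResponses {
  private final ConcurrentLinkedQueue<CompletableFuture<String>> pending =
      new ConcurrentLinkedQueue<>();

  // Handler: instead of sending the response, park it for later, so the
  // handler thread is free to service the next call.
  CompletableFuture<String> handleCall(String request) {
    CompletableFuture<String> response = new CompletableFuture<>();
    pending.add(response);
    return response;
  }

  // Background sync: flush all deferred responses in one batch,
  // e.g. after the edit log has been synced.
  void syncAndRespond() {
    CompletableFuture<String> f;
    while ((f = pending.poll()) != null) {
      f.complete("ok");
    }
  }

  public static void main(String[] args) {
    DeferredResponses server = new DeferredResponses();
    CompletableFuture<String> r1 = server.handleCall("edit-1");
    CompletableFuture<String> r2 = server.handleCall("edit-2");
    server.syncAndRespond();
    System.out.println(r1.join() + " " + r2.join());  // prints "ok ok"
  }
}
```

The point of the batching is visible in syncAndRespond: one sync services every parked call, instead of one sync per call.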
[jira] [Updated] (HADOOP-10300) Allowed deferred sending of call responses
[ https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-10300: --- Status: Patch Available (was: Reopened) > Allowed deferred sending of call responses > -- > > Key: HADOOP-10300 > URL: https://issues.apache.org/jira/browse/HADOOP-10300 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 3.0.0-alpha1, 2.0.0-alpha >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Labels: BB2015-05-TBR > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-10300-branch-2.7.0.patch, HADOOP-10300.patch, > HADOOP-10300.patch, HADOOP-10300.patch > > > RPC handlers currently do not return until the RPC call completes and > response is sent, or a partially sent response has been queued for the > responder. It would be useful for a proxy method to notify the handler > not to send the call's response yet. > A potential use case: a namespace handler in the NN might want to return > before the edit log is synced so it can service more requests and allow > increased batching of edits per sync. Background syncing could later trigger > the sending of the call response to the client.
[jira] [Commented] (HADOOP-13717) Normalize daemonization behavior of the diskbalancer with balancer and mover
[ https://issues.apache.org/jira/browse/HADOOP-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623204#comment-15623204 ] Andrew Wang commented on HADOOP-13717: -- Hi [~aw] could I get a review? Trivial one. > Normalize daemonization behavior of the diskbalancer with balancer and mover > > > Key: HADOOP-13717 > URL: https://issues.apache.org/jira/browse/HADOOP-13717 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HADOOP-13717.001.patch > > > Issue found when working with the HDFS balancer. > In {{hadoop_daemon_handler}}, it calls {{hadoop_verify_logdir}} even for the > "default" case which calls {{hadoop_start_daemon}}. {{daemon_outfile}} which > specifies the log location isn't even used here, since the command is being > started in the foreground. > I think we can push the {{hadoop_verify_logdir}} call down into > {{hadoop_start_daemon_wrapper}} instead, which does use the outfile. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13583) Incorporate checkcompatibility script which runs Java API Compliance Checker
[ https://issues.apache.org/jira/browse/HADOOP-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623206#comment-15623206 ] Andrew Wang commented on HADOOP-13583: -- Hi [~rkanter], mind doing another round of review? > Incorporate checkcompatibility script which runs Java API Compliance Checker > > > Key: HADOOP-13583 > URL: https://issues.apache.org/jira/browse/HADOOP-13583 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 2.6.4 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HADOOP-13583.001.patch, HADOOP-13583.002.patch, > HADOOP-13583.003.patch > > > Based on discussion at YETUS-445, this code can't go there, but it's still > very useful for release managers. A similar variant of this script has been > used for a while by Apache HBase and Apache Kudu, and IMO JACC output is > easier to understand than JDiff. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13632) Daemonization does not check process liveness before renicing
[ https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623201#comment-15623201 ] Andrew Wang commented on HADOOP-13632: -- Hi [~aw], could I get a review? > Daemonization does not check process liveness before renicing > - > > Key: HADOOP-13632 > URL: https://issues.apache.org/jira/browse/HADOOP-13632 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HADOOP-13632.001.patch > > > If you try to daemonize a process that is incorrectly configured, it will die > quite quickly. However, the daemonization function will still try to renice > it even if it's down, leading to something like this for my namenode: > {noformat} > -> % bin/hdfs --daemon start namenode > ERROR: Cannot set priority of namenode process 12036 > {noformat} > It'd be more user-friendly if, instead of this renice error, we said that the > process couldn't be started.
[jira] [Updated] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13410: - Resolution: Fixed Status: Resolved (was: Patch Available) Resolving this per above comment, since this was released in 3.0.0-alpha1. > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13410.001.patch, HADOOP-13410.002.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13410: - Fix Version/s: 3.0.0-alpha1 Looking at git log, this was included in 3.0.0-alpha1 though it looks like the fixversion wasn't set correctly. I'd appreciate if we did addendum patches on another JIRA for tracking purposes. > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13410.001.patch, HADOOP-13410.002.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List<URL> classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
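The proposed fix amounts to leaving the jar itself out of the list. This is a sketch of the idea, not the committed patch: only the unpacked content (top directory, classes/, and lib/*) is added, so each class appears on the classpath exactly once.

```java
import java.io.File;
import java.io.UncheckedIOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class RunJarClasspath {
  // Build the classpath from the unpacked jar content only; the original
  // jar file is deliberately omitted.
  static List<URL> buildClassPath(File workDir) {
    try {
      List<URL> classPath = new ArrayList<>();
      classPath.add(new File(workDir + "/").toURI().toURL());
      classPath.add(new File(workDir, "classes/").toURI().toURL());
      File[] libs = new File(workDir, "lib").listFiles();
      if (libs != null) {
        for (File lib : libs) {
          classPath.add(lib.toURI().toURL());
        }
      }
      return classPath;
    } catch (MalformedURLException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(buildClassPath(new File("/tmp/unjar")));
  }
}
```

Compared with the quoted original, the only change is dropping the `classPath.add(file.toURI().toURL())` line that re-added the jar.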
[jira] [Commented] (HADOOP-13508) FsPermission string constructor does not recognize sticky bit
[ https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623108#comment-15623108 ] Wei-Chiu Chuang commented on HADOOP-13508: -- Is it an incompatible change? I wonder if it makes sense to backport to branch-2. The OIV ReverseXML tool fails to reconstruct a fsimage that has the sticky bit because of this bug. > FsPermission string constructor does not recognize sticky bit > - > > Key: HADOOP-13508 > URL: https://issues.apache.org/jira/browse/HADOOP-13508 > Project: Hadoop Common > Issue Type: Bug >Reporter: Atul Sikaria >Assignee: Atul Sikaria > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13508-1.patch, HADOOP-13508-2.patch, > HADOOP-13508.003.patch, HADOOP-13508.004.patch, HADOOP-13508.005.patch, > HADOOP-13508.006.patch > > > FsPermission's string constructor breaks on valid permission strings, like > "1777". > This is because the FsPermission class naïvely uses UmaskParser to do its > parsing of permissions: (from source code): > public FsPermission(String mode) { > this((new UmaskParser(mode)).getUMask()); > } > The mode string UMask accepts is subtly different (esp wrt sticky bit), so > parsing Umask is not the same as parsing FsPermission.
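The parsing difference is easy to see with a standalone sketch (this is not the Hadoop FsPermission code): in a four-digit octal mode string such as "1777", the thousands digit carries the sticky/setuid/setgid bits, which a umask-style parser does not interpret.

```java
public class OctalPerm {
  // Parse a plain octal permission string, e.g. "1777" -> 01777.
  static short parseOctal(String mode) {
    int v = Integer.parseInt(mode, 8);
    if (v < 0 || v > 07777) {
      throw new IllegalArgumentException("bad mode: " + mode);
    }
    return (short) v;
  }

  // The sticky bit is the 0o1000 bit of the parsed mode.
  static boolean stickyBit(short perm) {
    return (perm & 01000) != 0;
  }

  public static void main(String[] args) {
    short p = parseOctal("1777");
    System.out.println(Integer.toOctalString(p));  // 1777
    System.out.println(stickyBit(p));              // true
  }
}
```

A parser written for umask strings never inspects that fourth digit, which is exactly why "1777" round-trips incorrectly through UmaskParser.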
[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623076#comment-15623076 ] Aaron Fabbri commented on HADOOP-13651: --- Great comments, thank you. {quote} Wrap with a LOG.isDebugEnabled() {quote} Good call, will do. I'll roll another patch later today with this. {quote} The is empty dir logic here is getting ugly already. I think it may be best to revisit that entire empty-dir logic {quote} Totally agree. Having a FileStatus which is cacheable (doesn't change based on activity in other files) is sort of a prerequisite for playing nicely with the MetadataStore division of labor. I can reason about the logic here, but the isS3A flag in LocalMetadataStore is a layering violation, IMO. My thought was to work on isEmptyDirectory as a separate effort after S3Guard v1 is merged. I kept the isS3A stuff separated for that reason. I think we may want to revisit the invariants around the empty directory blobs as well. E.g. instead of "exists(empty directory blob) iff directory is empty" the condition would be "exists(empty directory blob) implies there is a directory at that path", which is only necessary if there are no other keys with a matching prefix. I wonder if being lazier about cleaning up those blobs would improve s3a perf, etc. {quote} If, instead of asking of the s3a status, it was just something which could be queried off the metadata store, then it gets to implement the logic behind S3Guard.isEmptyDirectory(metadatastore, s3afilestatus) {quote} MetadataStore may give one of three answers to this question: (1) Yes, that path is an empty directory (2) No, that path is not an empty directory (3) I do not have full state for that directory, I cannot answer. This could be used to do the right thing in the client, but it may take some refactoring. Let me know how you want to tackle this part. 
I'd vote to defer work on this part to a separate jira because we want to keep other parts of the project moving/integrated, and I think this will be tricky enough to benefit from a separate code review and discussion. The existing solution is bad layering but keeps the semantics of isEmptyDirectory as is for S3A. Also, on TODOs in code in the feature branch: the intent is to commit frequently so folks can keep working in parallel, and all TODOs will be addressed before merging the feature branch. > S3Guard: S3AFileSystem Integration with MetadataStore > - > > Key: HADOOP-13651 > URL: https://issues.apache.org/jira/browse/HADOOP-13651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13651-HADOOP-13345.001.patch, > HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch > > > Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata > consistency and caching. > Implementation should have minimal overhead when no MetadataStore is > configured.
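The three possible answers a MetadataStore could give for "is this path an empty directory?" can be modeled as a tri-state result. The names below are illustrative only, not Hadoop's eventual API:

```java
// Tri-state answer for the empty-directory question: only UNKNOWN forces
// the caller to fall back to listing S3 itself.
public enum EmptyDirState {
  EMPTY,      // store has full listing state: directory is empty
  NOT_EMPTY,  // store has full listing state: directory has children
  UNKNOWN;    // store lacks authoritative state for this directory

  static boolean mustFallBackToS3(EmptyDirState s) {
    return s == UNKNOWN;
  }
}
```

Making UNKNOWN explicit keeps the layering clean: the store never has to pretend it knows the answer, and the client decides when an S3 round trip is actually required.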
[jira] [Comment Edited] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622984#comment-15622984 ] Steve Loughran edited comment on HADOOP-13651 at 10/31/16 6:45 PM: --- comments on patch 003 {{LocalMetadataStore.listChildren(). get(), put}} are going to be *very* expensive as the iterative loop is called even when the logging doesn't take place. Wrap with a LOG.isDebugEnabled(), or have {{DirListingMetadata}} implement a toString() method with all the log info, and just reference that. There's a fairly complex interdependency between S3AFS and the metadatastore (now that LocalMetadataStore is checking for its FS being S3A). The is-empty-dir logic here is getting ugly already. I think it may be best to revisit that entire empty-dir logic and see how it can be moved into this world, rather than just making things more complex. Currently, there are a couple of main ways the {{isEmptyDirectory}} bit is used * when deciding whether to delete an entry during parent delete walks {{deleteUnnecessaryFakeDirectories}}. This code has already been replaced in trunk. * when validating some operations which only apply to an empty directory. It's essentially a shortcut for the predicate "has-no-children", which again, is expensive in s3-land. If, instead of asking the s3a status, it was just something which could be queried off the metadata store, then it gets to implement the logic behind {{S3Guard.isEmptyDirectory(metadatastore, s3afilestatus)}} The base metadatastore (which would have to be renamed something like DefaultMetadataStore) would implement its check by passthrough from the file status: {code} boolean isEmptyDirectory(S3AFileStatus stat) { return stat.isEmptyDirectory;} {code} Other implementations can actually do a listing. It should be possible to require that there should be no accesses of that flag in the status except through an MD store class.
({{S3AFileStatus}}) is tagged as private/evolving, no external code should be using that field. Finally, now that this starts hooking up to S3, it's going to need to have a security story consistent with S3A. Which is currently: you get R/O or R/W filesystems, as well as filesystems an unauthed user may not read. We expect all FS operations to fail for an unauthed user; if they have read-only rights then mkdirs/delete, rename and file writing must all fail, leaving the FS in the same state it was before. Which implies that (a) there will have to be isolation between users and (b) things which update the MD store after, say, "delete", will have to take place after the s3 call succeeds, doing nothing on a failure. That should be testable: try to delete the landsat CSV, verify that it is still there on the next list/stat/open. Do bear in mind that other test infras may not have that file, or may supply one in an R/W bucket (a new complication, given there aren't yet any tests for attempting a write in an R/O bucket). 
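The delegation sketched above (a base store that trusts the recorded flag, other stores free to do a real listing) could look roughly like this. All class and method names here are illustrative stand-ins for the proposal, not the actual S3Guard API:

```java
// Sketch of the proposed routing: callers ask the MetadataStore whether a
// directory is empty instead of reading S3AFileStatus.isEmptyDirectory
// directly. These are minimal stand-in types, not the real S3A classes.

/** Minimal stand-in for org.apache.hadoop.fs.s3a.S3AFileStatus. */
class S3AFileStatus {
    final boolean isEmptyDirectory;
    S3AFileStatus(boolean isEmptyDirectory) { this.isEmptyDirectory = isEmptyDirectory; }
}

interface MetadataStore {
    boolean isEmptyDirectory(S3AFileStatus stat);
}

/** Default/passthrough store: trusts the flag recorded in the status. */
class DefaultMetadataStore implements MetadataStore {
    @Override
    public boolean isEmptyDirectory(S3AFileStatus stat) {
        return stat.isEmptyDirectory;
    }
}

/** Static entry point, as in the suggested S3Guard.isEmptyDirectory(...). */
final class S3Guard {
    static boolean isEmptyDirectory(MetadataStore store, S3AFileStatus stat) {
        return store.isEmptyDirectory(stat);
    }
}

public class EmptyDirSketch {
    public static void main(String[] args) {
        MetadataStore store = new DefaultMetadataStore();
        if (!S3Guard.isEmptyDirectory(store, new S3AFileStatus(true))) throw new AssertionError();
        if (S3Guard.isEmptyDirectory(store, new S3AFileStatus(false))) throw new AssertionError();
        System.out.println("ok");
    }
}
```

A listing-backed implementation would override isEmptyDirectory() to query its own records, which is the point of routing all callers through the store rather than the status flag.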
[jira] [Updated] (HADOOP-12325) RPC Metrics : Add the ability track and log slow RPCs
[ https://issues.apache.org/jira/browse/HADOOP-12325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-12325: --- Resolution: Fixed Fix Version/s: 2.7.4 Status: Resolved (was: Patch Available) I verified test failures and pushed to branch-2.7. > RPC Metrics : Add the ability track and log slow RPCs > - > > Key: HADOOP-12325 > URL: https://issues.apache.org/jira/browse/HADOOP-12325 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, metrics >Affects Versions: 2.7.1 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: Callers of WritableRpcEngine.call.png, > HADOOP-12325-branch-2.7.00.patch, HADOOP-12325.001.patch, > HADOOP-12325.002.patch, HADOOP-12325.003.patch, HADOOP-12325.004.patch, > HADOOP-12325.005.patch, HADOOP-12325.005.test.patch, HADOOP-12325.006.patch > > > This JIRA proposes to add a counter called RpcSlowCalls and also a > configuration setting that allows users to log really slow RPCs. Slow RPCs > are RPCs that fall at 99th percentile. This is useful to troubleshoot why > certain services like name node freezes under heavy load. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623010#comment-15623010 ] Steve Loughran commented on HADOOP-13651: - That test setup would work: one s3aFS using the MD store, one S3AFS going direct, with the direct one manipulating state behind the MD, and allowing for assertions about the state (paths deleted, etc). > S3Guard: S3AFileSystem Integration with MetadataStore > - > > Key: HADOOP-13651 > URL: https://issues.apache.org/jira/browse/HADOOP-13651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13651-HADOOP-13345.001.patch, > HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch > > > Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata > consistency and caching. > Implementation should have minimal overhead when no MetadataStore is > configured.
[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622984#comment-15622984 ] Steve Loughran commented on HADOOP-13651: - comments on patch 003 {{LocalMetadataStore.listChildren(), get(), put()}} are going to be *very* expensive as the iterative loop is called even when the logging doesn't take place. Wrap with a LOG.isDebugEnabled(), or have {{DirListingMetadata}} implement a toString() method with all the log info, and just reference that. There's a fairly complex interdependency between S3AFS and the metadatastore (now that LocalMetadataStore is checking for its FS being S3A). The is-empty-dir logic here is getting ugly already. I think it may be best to revisit that entire empty-dir logic and see how it can be moved into this world, rather than just making things more complex. Currently, there are a couple of main ways the {{isEmptyDirectory}} bit is used * when deciding whether to delete an entry during parent delete walks in {{deleteUnnecessaryFakeDirectories}}. This code has already been replaced in trunk. * when validating some operations which only apply to an empty directory. It's essentially a shortcut for the predicate "has-no-children", which again, is expensive in s3-land. If, instead of asking the s3a status, it was just something which could be queried off the metadata store, then it gets to implement the logic, i.e. {{S3Guard.isEmptyDirectory(metadatastore, s3afilestatus)}}. The base metadatastore (which would have to be renamed something like DefaultMetadataStore) would implement its check by passthrough from the file status: {{boolean isEmptyDirectory(S3AFileStatus stat) { return stat.isEmptyDirectory; } }}. Other implementations can actually do a listing. It should be possible to require that there be no accesses of that flag in the status except through an MD store class. 
({{S3AFileStatus}}) is tagged as private/evolving, no external code should be using that field. Finally, now that this starts hooking up to S3, it's going to need to have a security story consistent with S3A. Which is currently: you get R/O or R/W filesystems, as well as filesystems an unauthed user may not read. We expect all FS operations to fail for an unauthed user; if they have read-only rights then mkdirs/delete, rename and file writing must all fail, leaving the FS in the same state it was before. Which implies that (a) there will have to be isolation between users and (b) things which update the MD store after, say, "delete", will have to take place after the s3 call succeeds, doing nothing on a failure. That should be testable: try to delete the landsat CSV, verify that it is still there on the next list/stat/open. Do bear in mind that other test infras may not have that file, or may supply one in an R/W bucket (a new complication, given there aren't yet any tests for attempting a write in an R/O bucket). > S3Guard: S3AFileSystem Integration with MetadataStore > - > > Key: HADOOP-13651 > URL: https://issues.apache.org/jira/browse/HADOOP-13651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13651-HADOOP-13345.001.patch, > HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch > > > Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata > consistency and caching. > Implementation should have minimal overhead when no MetadataStore is > configured.
[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622969#comment-15622969 ] Lei (Eddy) Xu commented on HADOOP-13651: To make the test deterministic, I was thinking that we should populate the metadata store and a *testing* file system separately (i.e., files are in the metadata store, but are not in the testing file system, to simulate list-after-create). And similarly for the list-after-delete scenarios. Regarding proving the algorithms: I found that it is easy to prove that the metadata store can detect an inconsistency between itself and S3, but it was hard for me to prove what the actual cause of an inconsistency between the metadata store and S3 is, within the context of S3AFileSystem. > S3Guard: S3AFileSystem Integration with MetadataStore > - > > Key: HADOOP-13651 > URL: https://issues.apache.org/jira/browse/HADOOP-13651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13651-HADOOP-13345.001.patch, > HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch > > > Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata > consistency and caching. > Implementation should have minimal overhead when no MetadataStore is > configured.
[jira] [Updated] (HADOOP-10597) RPC Server signals backoff to clients when all request queues are full
[ https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-10597: --- Resolution: Fixed Status: Resolved (was: Patch Available) > RPC Server signals backoff to clients when all request queues are full > -- > > Key: HADOOP-10597 > URL: https://issues.apache.org/jira/browse/HADOOP-10597 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: HADOOP-10597-2.patch, HADOOP-10597-3.patch, > HADOOP-10597-4.patch, HADOOP-10597-5.patch, HADOOP-10597-6.patch, > HADOOP-10597-branch-2.7.patch, HADOOP-10597.patch, > MoreRPCClientBackoffEvaluation.pdf, RPCClientBackoffDesignAndEvaluation.pdf > > > Currently if an application hits the NN too hard, RPC requests can be left in a blocking > state, assuming the OS doesn't run out of connections. Alternatively, RPC or the NN can > throw some well-defined exception back to the client based on certain > policies when it is under heavy load; the client will understand such an exception > and do exponential back-off, as another implementation of > RetryInvocationHandler.
[jira] [Updated] (HADOOP-10597) RPC Server signals backoff to clients when all request queues are full
[ https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-10597: --- Fix Version/s: 2.7.4 Thanks Ming for confirming this. I verified the reported test failures (cannot reproduce locally) and pushed to branch-2.7. > RPC Server signals backoff to clients when all request queues are full > -- > > Key: HADOOP-10597 > URL: https://issues.apache.org/jira/browse/HADOOP-10597 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: HADOOP-10597-2.patch, HADOOP-10597-3.patch, > HADOOP-10597-4.patch, HADOOP-10597-5.patch, HADOOP-10597-6.patch, > HADOOP-10597-branch-2.7.patch, HADOOP-10597.patch, > MoreRPCClientBackoffEvaluation.pdf, RPCClientBackoffDesignAndEvaluation.pdf > > > Currently if an application hits the NN too hard, RPC requests can be left in a blocking > state, assuming the OS doesn't run out of connections. Alternatively, RPC or the NN can > throw some well-defined exception back to the client based on certain > policies when it is under heavy load; the client will understand such an exception > and do exponential back-off, as another implementation of > RetryInvocationHandler.
[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622912#comment-15622912 ] Steve Loughran commented on HADOOP-13651: - +1 with an instance per FS instance. S3A FS instances are becoming more expensive (a thread pool for uploads, soon one for copy and metadata operations), and adding a metadata store may make them more expensive still. But * an instance with no metadata is no more expensive than now * when all filesystems for a user are released, their resources get cleaned up. This matters in things like hive, which call {{FileSystem.closeAllForUGI(ugi)}} to release the resources after fielding a user's request. Where there is potential trouble, and it's important to be ready for it, is if two users connect to the same bucket in separate RPC calls: they are going to end up with separate FS instances, hence separate MD stores. When using dynamo-backed stores it's (probably) moot, but for local stores, it's going to complicate things. If one caller modifies the state, the other will not pick it up. But if you shared the store, then a user without write permission may be able to manipulate the metadata seen by the other (at least if a delete() goes through on the MD before the FS permissions are checked). This raises another question: what does happen with security here? > S3Guard: S3AFileSystem Integration with MetadataStore > - > > Key: HADOOP-13651 > URL: https://issues.apache.org/jira/browse/HADOOP-13651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13651-HADOOP-13345.001.patch, > HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch > > > Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata > consistency and caching. > Implementation should have minimal overhead when no MetadataStore is > configured. 
[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622860#comment-15622860 ] Steve Loughran commented on HADOOP-13651: - bq. can not reliability test eventual consistency oh, it's worse than that: it's very, very hard to trigger consistency problems today. Writing tests for the newer stuff is going to be even harder. Hopefully someone can prove their algorithms work. Any volunteers? > S3Guard: S3AFileSystem Integration with MetadataStore > - > > Key: HADOOP-13651 > URL: https://issues.apache.org/jira/browse/HADOOP-13651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13651-HADOOP-13345.001.patch, > HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch > > > Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata > consistency and caching. > Implementation should have minimal overhead when no MetadataStore is > configured.
[jira] [Commented] (HADOOP-12705) Upgrade Jackson 2.2.3 to 2.5.3 or later
[ https://issues.apache.org/jira/browse/HADOOP-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622794#comment-15622794 ] Steve Loughran commented on HADOOP-12705: - I do think it's time to upgrade ... but at the same time, if we push ahead of what everything else is using, there are going to be problems. Has anyone raised the upgrade with projects downstream (HBase etc.)? What versions are they using? > Upgrade Jackson 2.2.3 to 2.5.3 or later > --- > > Key: HADOOP-12705 > URL: https://issues.apache.org/jira/browse/HADOOP-12705 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran > Attachments: HADOOP-12705.002.patch, HADOOP-12705.01.patch, > HADOOP-13050-001.patch > > > There's no rush to do this; this is just the JIRA to track versions. However, > without the upgrade, things written for Jackson 2.4.4 can break (SPARK-12807). > Being Jackson, this is a potentially dangerous update.
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622752#comment-15622752 ] Steve Loughran commented on HADOOP-13449: - Regarding create() on a path: S3A isn't strict enough (HADOOP-13321). It checks the dest path is there and not a directory, but doesn't go up the tree. It should. We know it should, but we also know how much slower things would be. I'm hoping to make this something done asynchronously between creating the file and actually committing the write in the final close(). That is, fail later, rather than sooner, but do fail before anything is materialized (HADOOP-13654). > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch > > > Provide an implementation of the metadata store backed by DynamoDB.
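The ancestor check that create() is said to be missing can be sketched as below. This is purely illustrative: the map stands in for the getFileStatus()/HEAD probes that make the real check expensive in S3, and the method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class AncestorCheckSketch {
    // Walk every ancestor of the destination; if any ancestor exists as a
    // file, create() must fail. The Map stands in for per-path HEAD requests.
    static boolean createWouldSucceed(Map<String, Boolean> isFile, String dest) {
        String p = parent(dest);
        while (p != null) {
            if (Boolean.TRUE.equals(isFile.get(p))) {
                return false; // an ancestor is a file: reject the create
            }
            p = parent(p);
        }
        return true;
    }

    // Parent of a slash-separated path; null once we pass the root.
    static String parent(String path) {
        int i = path.lastIndexOf('/');
        return i <= 0 ? null : path.substring(0, i);
    }

    public static void main(String[] args) {
        Map<String, Boolean> isFile = new HashMap<>();
        isFile.put("/a/b", true); // /a/b is a file, so /a/b/c is invalid
        if (createWouldSucceed(isFile, "/a/b/c")) throw new AssertionError();
        if (!createWouldSucceed(isFile, "/a/x/y")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Each loop iteration corresponds to one remote probe, which is exactly the cost Steve wants to push into the window between create() and the committing close().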
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622747#comment-15622747 ] Steve Loughran commented on HADOOP-13449: - I'm just catching up with this, apologies if I say things that are clearly wrong to anyone who knows the code or its history: I don't, yet. h2. Build # [~cnauroth] I see your point about declaring the dependency; you are correct. It does need to be something published for downstream users. # I do still want the AWS update to be a standalone patch, and with a matching Jackson update. Those can perhaps be done to trunk/ itself, and merged in here, so that any/all other trunk work will be with the upgraded artifacts. h2. Code Anything in source marked TODO scares me. There's a lot here. Presumably the plan is to have them addressed by the time the patch goes in? Or at least pulled out into explicit followup JIRAs? h3. {{DynamoDBMetadataStore}} * just use .* on the static imports of the s3a constant, util, PathMetadataDynamoDBTranslation entries * L108: no need to mix {{@code}} and {{}} tags. For multiline, {{}} should suffice ... check with the generated javadocs to see * L218: that endpoint map/convert logic should be pulled into a static s3a util method, with tests. * L465: what if close() is called twice? If a re-entrant call is made? * L492, 531: Throw {{InterruptedIOException}}, or set the thread's interrupted bit again. We don't want that thread interrupt to be lost if at all possible. * Most of those info-level per-operation logs should be at Debug * Operation param to calls of {{translateException}} could be more informative. Consider: what info would you need there in order to debug this from the logs. h2. Tests As well as the unit tests, I need to be able to run the entire existing suite with s3guard enabled. This could be done with a new maven profile which would enable it, or simply a property passed down through the build. 
That's what's done in the scale tests in trunk, using methods in {{S3ATestUtils}} to allow a maven-defined property to override one in the core-site.xml, allowing you to enable it permanently in your {{test/resources/auth-keys.xml}} reference, or via maven. I see that the tests are using java 8 language features. That is going to make backporting to branch-2 in future harder. Is everyone happy there (i.e. willing to do the effort to downgrade the code if the need arises)? h3. {{MetadataStoreTestBase}} * L234: I know it's not in this patch, but I think the path name should be changed to something else. h3. {{TestDynamoDBMetadataStore}} I'll need to spend some time looking/playing with this. * There's an inevitable risk the native libs aren't around/going to work with the native OS running the build. What policy is good there? Fail? or downgrade to skip? It's probably easiest to leave it as it is now (fail) and see what needs to change as/when failures surface. * Add {{S3AFS.close()}} call in {{tearDownAfterClass}} just to make sure threads all get cleaned up. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch > > > Provide an implementation of the metadata store backed by DynamoDB.
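The interrupt-handling pattern the review asks for at L492/531 is sketched below: convert InterruptedException into InterruptedIOException and restore the thread's interrupt status, so the interrupt is not silently swallowed. The waitForTable() method is a hypothetical stand-in for whatever blocking DynamoDB call is involved:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class InterruptSketch {
    // Hypothetical wrapper around a blocking metadata-store call. On
    // InterruptedException we both re-set the thread's interrupt bit and
    // throw an IOException subclass that callers can recognise.
    static void waitForTable() throws IOException {
        try {
            Thread.sleep(10_000); // stand-in for a blocking wait on the table
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the interrupt bit
            throw (InterruptedIOException)
                new InterruptedIOException("interrupted waiting for table").initCause(e);
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // arrange a pending interrupt
        try {
            waitForTable(); // sleep() throws immediately: status already set
            System.out.println("not interrupted");
        } catch (IOException e) {
            // Both the exception type and the restored interrupt bit survive.
            boolean ok = e instanceof InterruptedIOException && Thread.interrupted();
            System.out.println(ok ? "ok" : "bad");
        }
    }
}
```

Without the Thread.currentThread().interrupt() call, code further up the stack that checks the interrupt flag would never see that a shutdown was requested.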
[jira] [Updated] (HADOOP-9424) The "hadoop jar" invocation should include the passed jar on the classpath as a whole
[ https://issues.apache.org/jira/browse/HADOOP-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HADOOP-9424: Labels: (was: BB2015-05-TBR) > The "hadoop jar" invocation should include the passed jar on the classpath as > a whole > - > > Key: HADOOP-9424 > URL: https://issues.apache.org/jira/browse/HADOOP-9424 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.0.3-alpha >Reporter: Harsh J >Assignee: Harsh J >Priority: Minor > Attachments: HADOOP-9424.patch > > > When you have a case such as this: > {{X.jar -> Classes = Main, Foo}} > {{Y.jar -> Classes = Bar}} > With implementation details such as: > * Main references Bar and invokes a public, static method on it. > * Bar does a class lookup to find Foo (Class.forName("Foo")). > Then when you do a {{HADOOP_CLASSPATH=Y.jar hadoop jar X.jar Main}}, > Bar's method fails with a ClassNotFound exception because of the way RunJar > runs. > RunJar extracts the passed jar and includes its contents on the ClassLoader > of its current thread, but the {{Class.forName(…)}} call from another class > does not check that class loader and hence cannot find the class as it's not > on any classpath it is aware of. > The script of "hadoop jar" should ideally include the passed jar argument in > the CLASSPATH before RunJar is invoked, for this above case to pass.
[jira] [Reopened] (HADOOP-9424) The "hadoop jar" invocation should include the passed jar on the classpath as a whole
[ https://issues.apache.org/jira/browse/HADOOP-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee reopened HADOOP-9424: - I would keep this issue open as it still is a real issue. Let's find a way to address this in the best way possible. It is not as urgent as other issues, so we can take some time to think it through. > The "hadoop jar" invocation should include the passed jar on the classpath as > a whole > - > > Key: HADOOP-9424 > URL: https://issues.apache.org/jira/browse/HADOOP-9424 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.0.3-alpha >Reporter: Harsh J >Assignee: Harsh J >Priority: Minor > Attachments: HADOOP-9424.patch > > > When you have a case such as this: > {{X.jar -> Classes = Main, Foo}} > {{Y.jar -> Classes = Bar}} > With implementation details such as: > * Main references Bar and invokes a public, static method on it. > * Bar does a class lookup to find Foo (Class.forName("Foo")). > Then when you do a {{HADOOP_CLASSPATH=Y.jar hadoop jar X.jar Main}}, > Bar's method fails with a ClassNotFound exception because of the way RunJar > runs. > RunJar extracts the passed jar and includes its contents on the ClassLoader > of its current thread, but the {{Class.forName(…)}} call from another class > does not check that class loader and hence cannot find the class as it's not > on any classpath it is aware of. > The script of "hadoop jar" should ideally include the passed jar argument in > the CLASSPATH before RunJar is invoked, for this above case to pass.
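The failure mode in this issue comes down to which classloader Class.forName consults. A minimal sketch of the distinction, using only JDK classes: here both loaders resolve java.util.ArrayList to the same Class, but under RunJar the thread context classloader would see the extracted-jar classes while a library's defining classloader would not.

```java
public class ClassLoaderSketch {
    public static void main(String[] args) throws Exception {
        // One-arg forName: resolved against the *defining* classloader of the
        // calling class -- the lookup that fails for Bar in the bug report.
        Class<?> a = Class.forName("java.util.ArrayList");

        // Three-arg forName against the thread context classloader -- the
        // variant that would see classes RunJar added to the current thread.
        ClassLoader ctx = Thread.currentThread().getContextClassLoader();
        Class<?> b = Class.forName("java.util.ArrayList", true, ctx);

        // For a JDK class both paths agree; for a class only on RunJar's
        // loader, only the second lookup would succeed.
        System.out.println(a == b ? "ok" : "different classes");
    }
}
```

This is why putting the jar on the process CLASSPATH (as the issue proposes) sidesteps the problem: the class then becomes visible to every loader in the delegation chain.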
[jira] [Resolved] (HADOOP-13772) ITestS3AContractRootDir still playing up, bug in eventually() retry logic?
[ https://issues.apache.org/jira/browse/HADOOP-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-13772. - Resolution: Duplicate Just saw that HADOOP-13713 was still open; this is a duplicate of that. Changes in the test runner which should have fixed it clearly haven't. > ITestS3AContractRootDir still playing up, bug in eventually() retry logic? > -- > > Key: HADOOP-13772 > URL: https://issues.apache.org/jira/browse/HADOOP-13772 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > Just got a transient failure in {{ITestS3AContractRootDir}}, one which > should have been handled by retrying; maybe the {{LambdaTestUtils}} code > isn't doing its job right.
[jira] [Commented] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes
[ https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622695#comment-15622695 ] Steve Loughran commented on HADOOP-13680: - checkstyle is line length: {code} ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:189: readAhead = longByteOption(conf, READAHEAD_RANGE, DEFAULT_READAHEAD_RANGE, 0);: Line is longer than 80 characters (found 84). {code} I'll fix that on the commit. Local test verified this BTW, S3 Ireland, with the -Pscale profile for scale tests too, as these play with the numbers. > fs.s3a.readahead.range to use getLongBytes > -- > > Key: HADOOP-13680 > URL: https://issues.apache.org/jira/browse/HADOOP-13680 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Abhishek Modi > Attachments: HADOOP-13680-branch-2-004.patch, HADOOP-13680.001.patch > > > The {{fs.s3a.readahead.range}} value is measured in bytes, but can be > hundreds of KB. Easier to use getLongBytes and set to things like "300k" > This will be backwards compatible with the existing settings if anyone is > using them, because the no-prefix default will still be bytes
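For readers unfamiliar with getLongBytes: Hadoop's Configuration.getLongBytes accepts values with binary-size suffixes, so "300k" parses to 307200 bytes while a bare number stays bytes, which is what keeps the change backwards compatible. The sketch below mimics that parsing for the two cases discussed; it is not Hadoop's implementation, and it only handles the k/m/g suffixes needed for the example:

```java
public class ByteSizeSketch {
    // Parse a byte-size string: a bare number is bytes; a trailing k/m/g
    // scales by the matching binary multiplier. Illustrative only.
    static long parseBytes(String v) {
        v = v.trim().toLowerCase();
        long mult = 1;
        switch (v.charAt(v.length() - 1)) {
            case 'k': mult = 1L << 10; break;
            case 'm': mult = 1L << 20; break;
            case 'g': mult = 1L << 30; break;
        }
        if (mult != 1) {
            v = v.substring(0, v.length() - 1); // drop the suffix
        }
        return Long.parseLong(v) * mult;
    }

    public static void main(String[] args) {
        // Backwards compatible: a plain number is still interpreted as bytes.
        if (parseBytes("65536") != 65536L) throw new AssertionError();
        // The new convenience: "300k" means 300 * 1024 = 307200 bytes.
        if (parseBytes("300k") != 307200L) throw new AssertionError();
        System.out.println("ok");
    }
}
```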
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622683#comment-15622683 ] Sangjin Lee commented on HADOOP-13410: -- The unit test failure is unrelated. I would greatly appreciate feedback. Thanks! > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Attachments: HADOOP-13410.001.patch, HADOOP-13410.002.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List<URL> classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath.
[jira] [Commented] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes
[ https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622439#comment-15622439 ] Hadoop QA commented on HADOOP-13680:
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 52s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 54s{color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13680 |
| GITHUB PR | https://github.com/apache/hadoop/pull/145 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux afa7e5528859 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / e4023f0 |
| Default Java | 1.7.0_111 |
| Multi-JDK
[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622398#comment-15622398 ] ASF GitHub Bot commented on HADOOP-13773: - GitHub user ferhui opened a pull request: https://github.com/apache/hadoop/pull/150 HADOOP-13773, set heap args for HADOOP_CLIENT_OPTS when HADOOP_HEAPSI… jira url is https://issues.apache.org/jira/browse/HADOOP-13773 You can merge this pull request into a Git repository by running: $ git pull https://github.com/ferhui/hadoop branch-2 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hadoop/pull/150.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #150 commit 1117baf860bf1ddbe3fc18a58bf576e986e3be21 Author: ferhui Date: 2016-10-31T14:52:20Z HADOOP-13773, set heap args for HADOOP_CLIENT_OPTS when HADOOP_HEAPSIZE is empty > wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2 > --- > > Key: HADOOP-13773 > URL: https://issues.apache.org/jira/browse/HADOOP-13773 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.6.1, 2.7.3 >Reporter: Fei Hui > Fix For: 2.8.0, 2.9.0, 2.7.4 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622355#comment-15622355 ] Fei Hui commented on HADOOP-13773: -- It affects branch-2; I will submit a pull request on GitHub. > wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2 > --- > > Key: HADOOP-13773 > URL: https://issues.apache.org/jira/browse/HADOOP-13773 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.6.1, 2.7.3 >Reporter: Fei Hui > Fix For: 2.8.0, 2.9.0, 2.7.4 > >
[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622349#comment-15622349 ] Fei Hui commented on HADOOP-13773: -- In conf/hadoop-env.sh there is: export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS". When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the heap setting does not take effect. Looking at bin/hadoop, the launch line is: exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@". HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE is overridden. For example, running 'HADOOP_HEAPSIZE=1024 hadoop jar ...' launches a java process like 'java -Xmx1024m ... -Xmx512m ...'; since the JVM honors the last heap flag, -Xmx512m takes effect and -Xmx1024m is ignored. > wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2 > --- > > Key: HADOOP-13773 > URL: https://issues.apache.org/jira/browse/HADOOP-13773 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.6.1, 2.7.3 >Reporter: Fei Hui > Fix For: 2.8.0, 2.9.0, 2.7.4 > >
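The guard that the pull request's title describes ("set heap args for HADOOP_CLIENT_OPTS when HADOOP_HEAPSIZE is empty") can be sketched as a small shell function. This is only an illustration of the idea; the function form and names here are hypothetical, and the actual code in PR #150 may differ:

```shell
# Sketch: since the JVM honors the LAST -Xmx flag on the command line,
# unconditionally prepending a -Xmx512m default into HADOOP_CLIENT_OPTS
# (which is expanded after $JAVA_HEAP_MAX in bin/hadoop) silently overrides
# HADOOP_HEAPSIZE. Only apply the default when no heap size was requested.
client_opts() {
  heapsize="$1"   # value of HADOOP_HEAPSIZE ("" if unset)
  opts="$2"       # existing HADOOP_CLIENT_OPTS
  if [ -z "$heapsize" ]; then
    echo "-Xmx512m $opts"
  else
    echo "$opts"
  fi
}

client_opts ""     "-Dexample=1"   # default heap applied
client_opts "1024" "-Dexample=1"   # user heap wins, no -Xmx512m added
```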
[jira] [Created] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2
Fei Hui created HADOOP-13773: Summary: wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2 Key: HADOOP-13773 URL: https://issues.apache.org/jira/browse/HADOOP-13773 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 2.7.3, 2.6.1 Reporter: Fei Hui Fix For: 2.8.0, 2.9.0, 2.7.4
[jira] [Updated] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes
[ https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13680: Status: Patch Available (was: Open) > fs.s3a.readahead.range to use getLongBytes > -- > > Key: HADOOP-13680 > URL: https://issues.apache.org/jira/browse/HADOOP-13680 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Abhishek Modi > Attachments: HADOOP-13680-branch-2-004.patch, HADOOP-13680.001.patch > > > The {{fs.s3a.readahead.range}} value is measured in bytes, but can be > hundreds of KB. Easier to use getLongBytes and set to things like "300k" > This will be backwards compatible with the existing settings if anyone is > using them, because the no-prefix default will still be bytes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
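The suffix semantics being proposed can be sketched in a standalone class. Hadoop's real implementation is {{Configuration.getLongBytes}} (backed by {{StringUtils.TraditionalBinaryPrefix}}); this approximation only illustrates the behavior, including the bytes-by-default rule that keeps existing unsuffixed settings backwards compatible:

```java
// Illustrative approximation of getLongBytes-style parsing: a bare number is
// taken as bytes, while k/m/g/t suffixes (case-insensitive) are binary
// prefixes, so "300k" means 300 * 1024 bytes. Not Hadoop's actual code.
public class ReadaheadSize {
    static long parseBytes(String value) {
        String v = value.trim().toLowerCase();
        long multiplier;
        switch (v.charAt(v.length() - 1)) {
            case 'k': multiplier = 1L << 10; break;
            case 'm': multiplier = 1L << 20; break;
            case 'g': multiplier = 1L << 30; break;
            case 't': multiplier = 1L << 40; break;
            default:  return Long.parseLong(v); // no prefix: plain bytes
        }
        return Long.parseLong(v.substring(0, v.length() - 1)) * multiplier;
    }

    public static void main(String[] args) {
        System.out.println(parseBytes("300k"));  // 307200
        System.out.println(parseBytes("65536")); // 65536 (unchanged default)
    }
}
```

Setting {{fs.s3a.readahead.range}} to "300k" would then yield the same value as the old byte-count form "307200".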
[jira] [Updated] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes
[ https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13680: Attachment: HADOOP-13680-branch-2-004.patch Patch 004 (trying to keep numbers higher). This is Abhishek's patch with some slight changes; the one I'll be +1'ing if Yetus is happy * cover acceptable suffixes in doc elements of all associated options, in core-defaults and s3/index.md * swap order of expected/actual in {{assertEquals()}}, as the exception generated on a failure reports parameter 1 as the "expected" value, param 2 as actual. > fs.s3a.readahead.range to use getLongBytes > -- > > Key: HADOOP-13680 > URL: https://issues.apache.org/jira/browse/HADOOP-13680 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Abhishek Modi > Attachments: HADOOP-13680-branch-2-004.patch, HADOOP-13680.001.patch > > > The {{fs.s3a.readahead.range}} value is measured in bytes, but can be > hundreds of KB. Easier to use getLongBytes and set to things like "300k" > This will be backwards compatible with the existing settings if anyone is > using them, because the no-prefix default will still be bytes
[jira] [Updated] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes
[ https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13680: Status: Open (was: Patch Available) > fs.s3a.readahead.range to use getLongBytes > -- > > Key: HADOOP-13680 > URL: https://issues.apache.org/jira/browse/HADOOP-13680 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Abhishek Modi > Attachments: HADOOP-13680.001.patch > > > The {{fs.s3a.readahead.range}} value is measured in bytes, but can be > hundreds of KB. Easier to use getLongBytes and set to things like "300k" > This will be backwards compatible with the existing settings if anyone is > using them, because the no-prefix default will still be bytes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13772) ITestS3AContractRootDir still playing up, bug in eventually() retry logic?
[ https://issues.apache.org/jira/browse/HADOOP-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622293#comment-15622293 ] Steve Loughran commented on HADOOP-13772: - Final stack. The thing to see is that the trace says "after 1 attempts": there's only been one iteration, though there should have been more than one {code} java.lang.AssertionError: After 1 attempts: listing after rm /* not empty final [00] S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-1; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false deleted [00] S3AFileStatus{path=s3a://hwdev-steve-ireland-new/Users; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false [01] S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-1; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false original [00] S3AFileStatus{path=s3a://hwdev-steve-ireland-new/Users; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false [01] S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-1; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest$1.call(AbstractContractRootDirectoryTest.java:103) at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest$1.call(AbstractContractRootDirectoryTest.java:97) at org.apache.hadoop.test.LambdaTestUtils.eventually(LambdaTestUtils.java:234) at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:95) at
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > ITestS3AContractRootDir still playing up, bug in eventually() retry logic? > -- > > Key: HADOOP-13772 > URL: https://issues.apache.org/jira/browse/HADOOP-13772 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > Just got a transient failure in, {{ITestS3AContractRootDir}}, one which > should have been handled by retrying; maybe the {{LambdaTestUtils}} code > isn't doing its job right. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
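The eventually() pattern under suspicion (retry a probe until it passes or a deadline expires, then report the attempt count) can be sketched generically. This is an illustrative stand-in, not Hadoop's {{LambdaTestUtils.eventually}} implementation; it shows where a wrong deadline or interval check would make the loop give up after a single attempt, producing the "After 1 attempts" failure above:

```java
import java.util.concurrent.Callable;

// Generic eventually()-style retry helper (illustrative sketch only).
public class Eventually {
    static <T> T eventually(long timeoutMillis, long intervalMillis,
                            Callable<T> probe) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                return probe.call();
            } catch (Exception e) {
                // A bug here -- e.g. comparing against the wrong clock or
                // a zero timeout -- would exit after the first attempt.
                if (System.currentTimeMillis() + intervalMillis > deadline) {
                    throw new AssertionError(
                        "After " + attempts + " attempts: " + e, e);
                }
                Thread.sleep(intervalMillis);
            }
        }
    }
}
```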
[jira] [Created] (HADOOP-13772) ITestS3AContractRootDir still playing up, bug in eventually() retry logic?
Steve Loughran created HADOOP-13772: --- Summary: ITestS3AContractRootDir still playing up, bug in eventually() retry logic? Key: HADOOP-13772 URL: https://issues.apache.org/jira/browse/HADOOP-13772 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 2.9.0 Reporter: Steve Loughran Assignee: Steve Loughran Just got a transient failure in {{ITestS3AContractRootDir}}, one which should have been handled by retrying; maybe the {{LambdaTestUtils}} code isn't doing its job right.
[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack
[ https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15621927#comment-15621927 ] Hadoop QA commented on HADOOP-11614: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 10s{color} | {color:red} 
hadoop-tools_hadoop-openstack generated 7 new + 7 unchanged - 0 fixed = 14 total (was 7) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-tools/hadoop-openstack: The patch generated 48 new + 174 unchanged - 85 fixed = 222 total (was 259) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 30s{color} | {color:red} hadoop-tools/hadoop-openstack generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s{color} | {color:green} hadoop-openstack in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-tools/hadoop-openstack | | | Possible null pointer dereference of resp in org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(String, URI, SwiftRestClient$HttpRequestProcessor) on exception path Dereferenced at SwiftRestClient.java:resp in org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(String, URI, SwiftRestClient$HttpRequestProcessor) on exception path Dereferenced at SwiftRestClient.java:[line 1423] | | | Load of known null value in org.apache.hadoop.fs.swift.util.HttpResponseUtils.getResponseBody(HttpResponse) At HttpResponseUtils.java:in org.apache.hadoop.fs.swift.util.HttpResponseUtils.getResponseBody(HttpResponse) At HttpResponseUtils.java:[line 71] | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-11614 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836136/HADOOP-11614-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 681ad83bd1a2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack
[ https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-11614: --- Attachment: HADOOP-11614-003.patch 003 patch: Rebased > Remove httpclient dependency from hadoop-openstack > -- > > Key: HADOOP-11614 > URL: https://issues.apache.org/jira/browse/HADOOP-11614 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Brahma Reddy Battula >Priority: Blocker > Attachments: HADOOP-11614-002.patch, HADOOP-11614-003.patch, > HADOOP-11614.patch > > > Remove httpclient dependency from hadoop-openstack and its pom.xml file.
[jira] [Commented] (HADOOP-13603) Remove package line length checkstyle rule
[ https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15621733#comment-15621733 ] Akira Ajisaka commented on HADOOP-13603: LGTM +1, upgrading checkstyle version can be separated. > Remove package line length checkstyle rule > -- > > Key: HADOOP-13603 > URL: https://issues.apache.org/jira/browse/HADOOP-13603 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Shane Kumpf >Assignee: Shane Kumpf > Attachments: HADOOP-13603.001.patch > > > The packages related to the DockerLinuxContainerRuntime all exceed the 80 > char line length limit enforced by checkstyle. This causes every build to > fail with a -1. I would like to exclude this rule from causing a failure. > {code} > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package > > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;: > Line is longer than 80 characters (found 88). > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package > > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;: > Line is longer than 80 characters (found 88). > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package > > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;: > Line is longer than 80 characters (found 88). 
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package > > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: > Line is longer than 80 characters (found 84). > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: > Line is longer than 80 characters (found 81). > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package > > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;: > Line is longer than 80 characters (found 88). > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package > > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;: > Line is longer than 80 characters (found 88). > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package > > org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;: > Line is longer than 80 characters (found 88). > {code} > Alternatively, we could look to restructure the packages here, but I question > what value this check really provides. 
[jira] [Commented] (HADOOP-13767) Aliyun Connection broken when idle then 1 minutes or build than 3 hours
[ https://issues.apache.org/jira/browse/HADOOP-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15621455#comment-15621455 ] Genmao Yu commented on HADOOP-13767: This issue is caused by Aliyun OSS's protection mechanism: Aliyun OSS closes connections that have been idle for more than 1 minute. If the business logic consumes data very slowly, the connection will be closed unexpectedly. {{Retry}} is a way to solve this issue. Thanks for your suggestions. > Aliyun Connection broken when idle then 1 minutes or build than 3 hours > --- > > Key: HADOOP-13767 > URL: https://issues.apache.org/jira/browse/HADOOP-13767 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0-alpha2 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: 3.0.0-alpha2 > >