[jira] [Comment Edited] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245315#comment-16245315 ]

Akira Ajisaka edited comment on HADOOP-13514 at 11/9/17 7:45 AM:
-----------------------------------------------------------------

The latest run seems fine. Very unpredictable.
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/584/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
I'd like to get a heap dump on OOM and analyze it; however, I can't reproduce the issue locally. OOM does not occur in my environment.

was (Author: ajisakaa):
The latest run seems fine. Very unpredictable.
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/584/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
I'd like to get heap dump on and analyze it, however, I can't reproduce the issue locally. OOM does not occur in my environment.

> Upgrade maven surefire plugin to 2.19.1
> ---------------------------------------
>
>                 Key: HADOOP-13514
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13514
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 2.8.0
>            Reporter: Ewan Higgs
>            Assignee: Akira Ajisaka
>         Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514-testing.001.patch, HADOOP-13514-testing.002.patch, HADOOP-13514-testing.003.patch, HADOOP-13514-testing.004.patch, HADOOP-13514-testing.005.patch, HADOOP-13514.002.patch, HADOOP-13514.003.patch, HADOOP-13514.004.patch, HADOOP-13514.005.patch, HADOOP-13514.006.patch, surefire-2.19.patch
>
> A lot of people working on Hadoop don't want to run all the tests when they develop; only the bits they're working on. Surefire 2.19 introduced more useful test filters which let us run a subset of the tests, bringing the build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true "-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*"
> {code}
> We can work around this by specifying the surefire version on the command line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true "-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*" -Dmaven-surefire-plugin.version=2.19.1
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
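The command-line workaround quoted above works because the build resolves the surefire version from a Maven property (the property name `maven-surefire-plugin.version` is taken from the override shown in the description; its exact location in Hadoop's pom hierarchy is assumed here). Making 2.19.1 the default amounts to bumping that property, roughly:

```
<!-- Sketch only: pom fragment pinning the default surefire version.
     The property name comes from the -Dmaven-surefire-plugin.version
     override above; the enclosing pom location is an assumption. -->
<properties>
  <maven-surefire-plugin.version>2.19.1</maven-surefire-plugin.version>
</properties>
```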
[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245315#comment-16245315 ]

Akira Ajisaka commented on HADOOP-13514:
----------------------------------------

The latest run seems fine. Very unpredictable.
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/584/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
I'd like to get a heap dump on OOM and analyze it; however, I can't reproduce the issue locally. OOM does not occur in my environment.
[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245303#comment-16245303 ]

Aaron Fabbri commented on HADOOP-15003:
---------------------------------------

Ran through the parallel integration tests about 5 times today without a reproduction. Heisenbug. Wrote a script to run it over and over and save logs overnight. Will shout if I find anything in the morning. FYI, Friday we are off on holiday.

> Merge S3A committers into trunk: Yetus patch checker
> ----------------------------------------------------
>
>                 Key: HADOOP-15003
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15003
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>         Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, HADOOP-13786-046.patch
>
> This is a Yetus-only JIRA created to have Yetus review the HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR [https://github.com/apache/hadoop/pull/282] is stopping this happening in HADOOP-14971.
> Reviews should go into the PR/other task
[jira] [Commented] (HADOOP-14964) AliyunOSS: backport HADOOP-12756 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245304#comment-16245304 ]

Kai Zheng commented on HADOOP-14964:
------------------------------------

Thanks Sammi for the patch!
1. Did you run this in your local environment? If so, would you please post the local test results?
2. For the 2.7 branch, I wonder if we could use a previous OSS SDK version that matches an appropriate httpclient library. Would you do some checking? Thanks!

> AliyunOSS: backport HADOOP-12756 to branch-2
> --------------------------------------------
>
>                 Key: HADOOP-14964
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14964
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/oss
>            Reporter: Genmao Yu
>            Assignee: Genmao Yu
>         Attachments: HADOOP-14964-branch-2.000.patch
[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245297#comment-16245297 ]

Allen Wittenauer commented on HADOOP-13514:
-------------------------------------------

The hdfs unit tests are wildly unpredictable and no one really seems to care enough to fix them. Here's the full run of trunk from two days ago:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/583/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Lots of OOMs there too. branch-2 is several times worse.
[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245292#comment-16245292 ]

Akira Ajisaka commented on HADOOP-13514:
----------------------------------------

Many hdfs tests failed with OOM :(
{noformat}
java.lang.OutOfMemoryError: unable to create new native thread
{noformat}
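The heap dump that was asked for in the earlier comment can be captured automatically with standard HotSpot flags passed through surefire's `argLine`. A hedged sketch follows; note that a bare `-DargLine` on the command line may clobber JVM arguments Hadoop's poms already configure, that `TestDFSExample` is a hypothetical test name, and that this particular OOM flavor ("unable to create new native thread") usually indicates OS thread-limit exhaustion, so `ulimit -u` is worth checking alongside any heap analysis:

```
# Sketch: ask HotSpot to write a dump when a test JVM throws OOM.
# The -XX flags are standard HotSpot options; TestDFSExample is a
# hypothetical test name, and overriding argLine may drop JVM args
# the Hadoop poms normally set.
mvn test -Dtest=TestDFSExample \
  -DargLine="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/hdfs-test-dumps"
```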
[jira] [Commented] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245274#comment-16245274 ]

Xiao Chen commented on HADOOP-14960:
------------------------------------

Thanks Misha for the new changes: incorporating the timestamp and gctime into a class, and using a {{GcData}} class to handle update atomicity. Looks pretty good! Please fix the checkstyle warnings. While you're at it, I have a few minor comments :)
- Can we also {{setName}} on the {{GcTimeMonitor}} class, for better debuggability?
- Let's add a precondition check on {{bufSize}} too, to make sure we don't allocate crazy sizes here (say, 1M?)
- Trivial Javadoc comments:
  {{put a limit on a number of GCTimeMonitor instances}}: s/a number/the number/g
  {{@param observationWindowMs a period until now, over which the percentage}}: s/a period until now, over which/the interval over which/
- We usually use javadoc comment style on the ASF license class header. Could you update {{GcTimeMonitor}}'s first line from {{/\*}} to {{/\*\*}}?

> Add GC time percentage monitor/alerter
> --------------------------------------
>
>                 Key: HADOOP-14960
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14960
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Misha Dmitriev
>            Assignee: Misha Dmitriev
>         Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, HADOOP-14960.03.patch
>
> Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides several metrics related to GC. Unfortunately, these metrics are not as useful as they could be, because they don't answer the first and most important question related to GC and JVM health: what percentage of time is my JVM paused in GC?
> This percentage, calculated as the sum of the GC pauses over some period (say, 1 minute) divided by that period, is the most convenient measure of GC health because:
> - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% is really bad
> - it allows for easy apples-to-apples comparison between runs, even between different apps
> - when this metric reaches some critical value like 70%, it almost always indicates a "GC death spiral", from which the app can recover only if it drops some task(s) etc.
> The existing "total GC time", "total number of GCs" etc. metrics only give numbers that can be used to roughly estimate this percentage. Thus it is suggested to add a new metric to this class, and possibly allow users to register handlers that will be automatically invoked if this metric reaches the specified threshold.
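The percentage described in the issue can be sampled from the JVM's standard management beans. The following is a minimal self-contained sketch of the idea, not the HADOOP-14960 patch; class and method names are illustrative:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

/** Sketch (not the HADOOP-14960 patch): GC time as a percentage of a
 *  trailing observation window, sampled from the standard GC MXBeans. */
public class GcPercentSketch {

    /** Pure helper: percentage of windowMs spent in GC, given the delta of
     *  cumulative GC time between two samples taken windowMs apart. */
    public static double gcPercent(long gcTimeDeltaMs, long windowMs) {
        if (windowMs <= 0) {
            throw new IllegalArgumentException("windowMs must be positive");
        }
        return 100.0 * gcTimeDeltaMs / windowMs;
    }

    /** Sum of cumulative collection times across all collectors, in ms. */
    public static long totalGcTimeMs() {
        long total = 0;
        for (GarbageCollectorMXBean b
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = b.getCollectionTime();  // -1 if undefined for a collector
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long before = totalGcTimeMs();
        long windowMs = 1000;
        Thread.sleep(windowMs);  // stand-in for the observation window
        long delta = totalGcTimeMs() - before;
        System.out.printf("GC time over last %d ms: %.2f%%%n",
            windowMs, gcPercent(delta, windowMs));
    }
}
```

A 60 s window with 600 ms of accumulated pauses yields 1%, in the "good" range discussed above; an alerter would compare this value against a configured threshold such as 70%.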
[jira] [Commented] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245199#comment-16245199 ]

Hadoop QA commented on HADOOP-15026:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 13s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || branch-2 Compile Tests ||
| +1 | mvninstall | 10m 6s | branch-2 passed |
| +1 | mvnsite | 0m 27s | branch-2 passed |
|| || || || Patch Compile Tests ||
| +1 | mvnsite | 0m 22s | the patch passed |
| +1 | shellcheck | 0m 1s | There were no new shellcheck issues. |
| +1 | shelldocs | 0m 8s | There were no new shelldocs issues. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
|| || || || Other Tests ||
| +1 | unit | 0m 19s | hadoop-resourceestimator in the patch passed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
|    |            | 12m 30s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HADOOP-15026 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896803/HADOOP-15026-branch-2-v3.patch |
| Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux ea081b2cc0ac 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 46a740a |
| maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| shellcheck | v0.4.6 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13654/testReport/ |
| Max. process+thread count | 66 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-resourceestimator U: hadoop-tools/hadoop-resourceestimator |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13654/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Rebase ResourceEstimator start/stop scripts for branch-2
> --------------------------------------------------------
>
>                 Key: HADOOP-15026
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15026
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.9.0
>            Reporter: Subru Krishnan
>            Assignee: Rui Li
>         Attachments: HADOOP-15026-branch-2-v1.patch, HADOOP-15026-branch-2-v2.patch, HADOOP-15026-branch-2-v3.patch
>
> HADOOP-14840 introduced the {{ResourceEstimatorService}} which was cherry-picked from trunk to branch-2. The start/stop scripts need minor alignment with branch-2.
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HADOOP-15026:
----------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HADOOP-15026:
----------------------------
    Attachment: HADOOP-15026-branch-2-v3.patch

I have manually tested the patch before uploading.
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HADOOP-15026:
----------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245172#comment-16245172 ]

Hadoop QA commented on HADOOP-15026:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 16m 44s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || branch-2 Compile Tests ||
| +1 | mvninstall | 11m 22s | branch-2 passed |
| +1 | mvnsite | 0m 31s | branch-2 passed |
|| || || || Patch Compile Tests ||
| +1 | mvnsite | 0m 28s | the patch passed |
| -1 | shellcheck | 0m 1s | The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 | shelldocs | 0m 10s | There were no new shelldocs issues. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
|| || || || Other Tests ||
| +1 | unit | 0m 23s | hadoop-resourceestimator in the patch passed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
|    |            | 30m 35s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HADOOP-15026 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896797/HADOOP-15026-branch-2-v2.patch |
| Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 53a5e3a839a4 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 46a740a |
| maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| shellcheck | v0.4.6 |
| shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/13653/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13653/testReport/ |
| Max. process+thread count | 66 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-resourceestimator U: hadoop-tools/hadoop-resourceestimator |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13653/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HADOOP-15026:
----------------------------
    Attachment: HADOOP-15026-branch-2-v2.patch
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HADOOP-15026:
----------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HADOOP-15026:
----------------------------
    Status: Open  (was: Patch Available)
[jira] [Commented] (HADOOP-14976) Allow overriding HADOOP_SHELL_EXECNAME
[ https://issues.apache.org/jira/browse/HADOOP-14976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245135#comment-16245135 ]

Hadoop QA commented on HADOOP-14976:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 8s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
|  0 | mvndep | 0m 19s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 18s | trunk passed |
| +1 | mvnsite | 6m 50s | trunk passed |
| +1 | shadedclient | 9m 51s | branch has no errors when building and testing our client artifacts. |
|| || || || Patch Compile Tests ||
|  0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvnsite | 6m 22s | the patch passed |
| +1 | shellcheck | 1m 19s | There were no new shellcheck issues. |
| +1 | shelldocs | 0m 8s | There were no new shelldocs issues. |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 10m 5s | patch has no errors when building and testing our client artifacts. |
|| || || || Other Tests ||
| +1 | unit | 2m 9s | hadoop-common in the patch passed. |
| +1 | unit | 0m 58s | hadoop-hdfs in the patch passed. |
| +1 | unit | 6m 41s | hadoop-yarn in the patch passed. |
| +1 | unit | 1m 57s | hadoop-mapreduce-project in the patch passed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
|    |            | 64m 28s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14976 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896783/HADOOP-14976.03.patch |
| Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 51d74d28516b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 49b4c0b |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/13652/artifact/out/whitespace-eol.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13652/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13652/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Allow overriding HADOOP_SHELL_EXECNAME
> --------------------------------------
>
>                 Key: HADOOP-14976
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14976
>             Project: Hadoop Common
>          Issue Type:
[jira] [Commented] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245126#comment-16245126 ] Hudson commented on HADOOP-15025: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13207 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13207/]) HADOOP-15025. Ensure singleton for ResourceEstimatorService. (Rui Li via (subru: rev f2df6b8983aace73ad27934bd9f7f4d766e0b25f) * (edit) hadoop-tools/hadoop-resourceestimator/src/test/java/org/apache/hadoop/resourceestimator/service/TestResourceEstimatorService.java * (edit) hadoop-tools/hadoop-resourceestimator/src/main/java/org/apache/hadoop/resourceestimator/service/ResourceEstimatorService.java > Ensure singleton for ResourceEstimatorService > - > > Key: HADOOP-15025 > URL: https://issues.apache.org/jira/browse/HADOOP-15025 > Project: Hadoop Common > Issue Type: Bug >Reporter: Subru Krishnan >Assignee: Rui Li > Fix For: 2.9.0, 3.0.0 > > Attachments: HADOOP-15025-v1.patch, HADOOP-15025-v2.patch > > > HADOOP-15013 fixed static findbugs warnings but this has led to the > singleton being broken for {{ResourceEstimatorService}}. This jira tracks the > fix for the same. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
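For readers following along: one common way to keep a lazy, thread-safe singleton without the static-field write that findbugs flags is the initialization-on-demand holder idiom. The sketch below is illustrative only (the class name {{SingletonSketch}} is hypothetical, and this is not the actual ResourceEstimatorService code):

```java
// Illustrative sketch only -- not the actual ResourceEstimatorService code.
// The initialization-on-demand holder idiom gives a lazy, thread-safe
// singleton without assigning a static field from an instance method,
// which is the kind of write findbugs warns about.
public final class SingletonSketch {
    private SingletonSketch() { }

    // The JVM's class-loading guarantees make this both lazy (Holder is
    // not loaded until getInstance() is first called) and thread-safe.
    private static final class Holder {
        static final SingletonSketch INSTANCE = new SingletonSketch();
    }

    public static SingletonSketch getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every caller observes the same instance.
        System.out.println(getInstance() == getInstance()); // prints "true"
    }
}
```

Compared with a singleton that assigns a static field lazily inside a synchronized instance method, this needs no locking and no mutable static state.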
[jira] [Updated] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HADOOP-15025: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 2.9.0 Status: Resolved (was: Patch Available) Thanks [~Rui Li] for the patch and [~asuresh] for the review, I committed this to trunk/branch-3/2.9/2.9.0. > Ensure singleton for ResourceEstimatorService > - > > Key: HADOOP-15025 > URL: https://issues.apache.org/jira/browse/HADOOP-15025 > Project: Hadoop Common > Issue Type: Bug >Reporter: Subru Krishnan >Assignee: Rui Li > Fix For: 2.9.0, 3.0.0 > > Attachments: HADOOP-15025-v1.patch, HADOOP-15025-v2.patch > > > HADOOP-15013 fixed static findbugs warnings but this has led to the > singleton being broken for {{ResourceEstimatorService}}. This jira tracks the > fix for the same. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14995) Configuration should load resource by forced as option
[ https://issues.apache.org/jira/browse/HADOOP-14995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] DENG FEI updated HADOOP-14995: -- Attachment: HADOOP-14995.trunk-2.patch > Configuration should load resource by forced as option > -- > > Key: HADOOP-14995 > URL: https://issues.apache.org/jira/browse/HADOOP-14995 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Reporter: DENG FEI >Priority: Critical > Attachments: HADOOP-14995.trunk-001.patch, HADOOP-14995.trunk-2.patch > > > A MapReduce task is initialized from _job.xml_, but if the soft link to the > "job.xml" file is missing, Configuration silently does nothing in quiet mode. > For example, in a job whose map tasks write directly to HDFS with no reduce > tasks, a map task missing job.xml will run with default behavior, effectively > as an _IdentityMapper_ writing to local output, resulting in data loss. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
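The failure mode described above can be sketched generically: a quiet loader swallows a missing resource and falls back to defaults, while a "forced" load would fail fast. This is an illustrative mock (the class and method names are hypothetical), not Hadoop's actual {{Configuration}} code:

```java
// Illustrative sketch of the quiet-mode failure mode -- not Hadoop's
// actual Configuration implementation.
import java.io.File;
import java.util.Properties;

public class QuietLoadSketch {
    // In quiet mode a missing resource is silently ignored, so the caller
    // proceeds with an empty (default) configuration; a forced load fails fast.
    static Properties load(String path, boolean quiet) {
        Properties props = new Properties();
        if (!new File(path).exists()) {
            if (quiet) {
                return props;  // silently fall back to defaults
            }
            throw new IllegalStateException("missing resource: " + path);
        }
        // (real code would parse the resource here)
        return props;
    }

    public static void main(String[] args) {
        // Quiet mode: a missing job.xml yields an empty default configuration.
        System.out.println(load("no-such-job.xml", true).isEmpty());
        // Forced load: the same missing file raises an error immediately.
        try {
            load("no-such-job.xml", false);
        } catch (IllegalStateException e) {
            System.out.println("failed fast: " + e.getMessage());
        }
    }
}
```

A forced-load option as proposed here would convert the silent-default case into the fail-fast case, preventing the data-loss scenario in the description.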
[jira] [Commented] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245089#comment-16245089 ] Hadoop QA commented on HADOOP-15025: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-resourceestimator in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896774/HADOOP-15025-v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2dc1f8664628 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0de1068 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13651/testReport/ | | Max. process+thread count | 298 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-resourceestimator U: hadoop-tools/hadoop-resourceestimator | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13651/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ensure singleton for ResourceEstimatorService > - > > Key: HADOOP-15025 >
[jira] [Commented] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245087#comment-16245087 ] Hadoop QA commented on HADOOP-15025: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hadoop-resourceestimator in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896774/HADOOP-15025-v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a038b7c207d3 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0de1068 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13650/testReport/ | | Max. process+thread count | 324 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-resourceestimator U: hadoop-tools/hadoop-resourceestimator | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13650/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ensure singleton for ResourceEstimatorService > - > > Key: HADOOP-15025 >
[jira] [Updated] (HADOOP-14976) Allow overriding HADOOP_SHELL_EXECNAME
[ https://issues.apache.org/jira/browse/HADOOP-14976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-14976: --- Attachment: HADOOP-14976.03.patch Thanks for the review. The v3 patch incorporates your suggestion. > Allow overriding HADOOP_SHELL_EXECNAME > -- > > Key: HADOOP-14976 > URL: https://issues.apache.org/jira/browse/HADOOP-14976 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Attachments: HADOOP-14976.01.patch, HADOOP-14976.02.patch, > HADOOP-14976.03.patch > > > Some Hadoop shell scripts infer their own name using this bit of shell magic: > {code} > MYNAME="${BASH_SOURCE-$0}" > HADOOP_SHELL_EXECNAME="${MYNAME##*/}" > {code} > e.g. see the > [hdfs|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs#L18] > script. > The inferred shell script name is later passed to _hadoop-functions.sh_ which > uses it to construct the names of some environment variables. E.g. when > invoking _hdfs datanode_, the options variable name is inferred as follows: > {code} > # HDFS + DATANODE + OPTS -> HDFS_DATANODE_OPTS > {code} > This works well if the calling script name is the standard {{hdfs}} or {{yarn}}. > If a distribution renames the script to something like foo.bar, then the > variable names will be inferred as {{FOO.BAR_DATANODE_OPTS}}. This is not a > valid bash variable name. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
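The inference described in the issue can be sketched in plain bash. The {{derive_opts_var}} helper below is hypothetical, written only to mirror the steps described above; the real logic lives in hadoop-functions.sh and differs in detail:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the variable-name inference described in
# HADOOP-14976 (illustrative sketch, not the actual hadoop-functions.sh code).
derive_opts_var() {
  local execname="${1##*/}"   # strip any leading path: /usr/bin/hdfs -> hdfs
  local subcmd="$2"
  # Uppercase and join: hdfs + datanode + OPTS -> HDFS_DATANODE_OPTS
  echo "${execname}_${subcmd}_OPTS" | tr '[:lower:]' '[:upper:]'
}

derive_opts_var /usr/bin/hdfs datanode   # -> HDFS_DATANODE_OPTS
derive_opts_var ./foo.bar datanode       # -> FOO.BAR_DATANODE_OPTS ('.' is illegal in a variable name)
```

The second call shows the bug: nothing in the inference sanitizes the script name, so a renamed script produces a string bash cannot use as a variable name, which is why overriding HADOOP_SHELL_EXECNAME is useful.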
[jira] [Commented] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245057#comment-16245057 ] Hadoop QA commented on HADOOP-15026: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 4s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 1s{color} | {color:red} The patch generated 32 new + 0 unchanged - 0 fixed = 32 total (was 0) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 10s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s{color} | {color:green} hadoop-resourceestimator in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 | | JIRA Issue | HADOOP-15026 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896770/HADOOP-15026-branch-2-v1.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux fbb3cf22d64b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 5486542 | | maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) | | shellcheck | v0.4.6 | | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/13649/artifact/out/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13649/testReport/ | | Max. process+thread count | 65 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-resourceestimator U: hadoop-tools/hadoop-resourceestimator | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13649/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Rebase ResourceEstimator start/stop scripts for branch-2 > > > Key: HADOOP-15026 > URL: https://issues.apache.org/jira/browse/HADOOP-15026 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Rui Li > Attachments: HADOOP-15026-branch-2-v1.patch > > > HADOOP-14840 introduced the {{ResourceEstimatorService}} which was > cherry-picked from trunk to branch-2. The start/stop scripts need minor > alignment with branch-2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245051#comment-16245051 ] Arun Suresh commented on HADOOP-15025: -- The changes seem to have greatly simplified the tests. Thanks for the patch [~subru] and [~Rui Li] +1, pending jenkins > Ensure singleton for ResourceEstimatorService > - > > Key: HADOOP-15025 > URL: https://issues.apache.org/jira/browse/HADOOP-15025 > Project: Hadoop Common > Issue Type: Bug >Reporter: Subru Krishnan >Assignee: Rui Li > Attachments: HADOOP-15025-v1.patch, HADOOP-15025-v2.patch > > > HADOOP-15013 fixed static findbugs warnings but this has led to the > singleton being broken for {{ResourceEstimatorService}}. This jira tracks the > fix for the same. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HADOOP-15025: Attachment: HADOOP-15025-v2.patch Fixing test to use Singleton resource. > Ensure singleton for ResourceEstimatorService > - > > Key: HADOOP-15025 > URL: https://issues.apache.org/jira/browse/HADOOP-15025 > Project: Hadoop Common > Issue Type: Bug >Reporter: Subru Krishnan >Assignee: Rui Li > Attachments: HADOOP-15025-v1.patch, HADOOP-15025-v2.patch > > > HADOOP-15013 fixed static findbugs warnings but this has led to the > singleton being broken for {{ResourceEstimatorService}}. This jira tracks the > fix for the same. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HADOOP-15026: Status: Patch Available (was: Open) > Rebase ResourceEstimator start/stop scripts for branch-2 > > > Key: HADOOP-15026 > URL: https://issues.apache.org/jira/browse/HADOOP-15026 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Rui Li > Attachments: HADOOP-15026-branch-2-v1.patch > > > HADOOP-14840 introduced the {{ResourceEstimatorService}} which was > cherry-picked from trunk to branch-2. The start/stop scripts need minor > alignment with branch-2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HADOOP-15026: Attachment: HADOOP-15026-branch-2-v1.patch > Rebase ResourceEstimator start/stop scripts for branch-2 > > > Key: HADOOP-15026 > URL: https://issues.apache.org/jira/browse/HADOOP-15026 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Rui Li > Attachments: HADOOP-15026-branch-2-v1.patch > > > HADOOP-14840 introduced the {{ResourceEstimatorService}} which was > cherry-picked from trunk to branch-2. The start/stop scripts need minor > alignment with branch-2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244931#comment-16244931 ] Hadoop QA commented on HADOOP-15025: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 51s{color} | {color:red} hadoop-resourceestimator in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.resourceestimator.service.TestResourceEstimatorService | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896748/HADOOP-15025-v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 31b5ab0328c4 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cb35a59 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13648/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13648/testReport/ | | Max. process+thread count | 328 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-resourceestimator U:
[jira] [Commented] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244916#comment-16244916 ] Hadoop QA commented on HADOOP-15025: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 21s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 31s{color} | {color:red} hadoop-resourceestimator in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.resourceestimator.service.TestResourceEstimatorService | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896748/HADOOP-15025-v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a80bedfa48a9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cb35a59 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13647/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13647/testReport/ | | Max. process+thread count | 435 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-resourceestimator U:
[jira] [Updated] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HADOOP-15025: Status: Patch Available (was: Open)

> Ensure singleton for ResourceEstimatorService
> -
>
> Key: HADOOP-15025
> URL: https://issues.apache.org/jira/browse/HADOOP-15025
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Subru Krishnan
> Assignee: Rui Li
> Attachments: HADOOP-15025-v1.patch
>
> HADOOP-15013 fixed static findbugs warnings, but this has led to the singleton being broken for {{ResourceEstimatorService}}. This JIRA tracks the fix.

-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
[ https://issues.apache.org/jira/browse/HADOOP-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HADOOP-15025: Attachment: HADOOP-15025-v1.patch

Trivial fix to change the import to the Jersey Singleton annotation. I have manually tested it and it works fine.

> Ensure singleton for ResourceEstimatorService
> -
>
> Key: HADOOP-15025
> URL: https://issues.apache.org/jira/browse/HADOOP-15025
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Subru Krishnan
> Assignee: Rui Li
> Attachments: HADOOP-15025-v1.patch
>
> HADOOP-15013 fixed static findbugs warnings, but this has led to the singleton being broken for {{ResourceEstimatorService}}. This JIRA tracks the fix.
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HADOOP-15026: Affects Version/s: 2.9.0 > Rebase ResourceEstimator start/stop scripts for branch-2 > > > Key: HADOOP-15026 > URL: https://issues.apache.org/jira/browse/HADOOP-15026 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Rui Li > > HADOOP-14840 introduced the {{ResourceEstimatorService}} which was > cherry-picked from trunk to branch-2. The start/stop scripts need minor > alignment with branch-2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
Subru Krishnan created HADOOP-15026: --- Summary: Rebase ResourceEstimator start/stop scripts for branch-2 Key: HADOOP-15026 URL: https://issues.apache.org/jira/browse/HADOOP-15026 Project: Hadoop Common Issue Type: Bug Reporter: Subru Krishnan Assignee: Rui Li HADOOP-14840 introduced the {{ResourceEstimatorService}} which was cherry-picked from trunk to branch-2. The start/stop scripts need minor alignment with branch-2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15025) Ensure singleton for ResourceEstimatorService
Subru Krishnan created HADOOP-15025:
---
Summary: Ensure singleton for ResourceEstimatorService
Key: HADOOP-15025
URL: https://issues.apache.org/jira/browse/HADOOP-15025
Project: Hadoop Common
Issue Type: Bug
Reporter: Subru Krishnan
Assignee: Rui Li

HADOOP-15013 fixed static findbugs warnings, but this has led to the singleton being broken for {{ResourceEstimatorService}}. This JIRA tracks the fix.
[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244699#comment-16244699 ] Allen Wittenauer commented on HADOOP-14163: --- bq. Hugo is a static website generator, mvn site is for generating project documentation. mvn site is also effectively a static website generator. So that's pretty much a wash. There are some big wins I don't see with hugo: a) maven is already part of our build process and already very familiar to pretty much the entire community b) maven is cross platform and available on MANY OSes/chipsets without having to compile it yourself to get there c) maven is already part of our CI pipeline... bq. to simplify the release process Sure, everything is simpler if it doesn't have any sort of testing phase. A mvn site-based install means we can actually use precommit to apply patches for testing. bq. With hugo it's very simple and easy to automatize thx to the flexible content handling and dynamic menu generation. To me, this seems like greatly overestimating the need and underestimating the existing toolset. The main website rarely gets updated. Using a different build and content management system won't help in that regard either. Also, just to clarify "How does this integrate with the docs from a release?"... I'm looking for "release manager takes the site tar ball and does X with it." Right now, none of that seems to be covered. bq. I think only the docs/downloads are using external links which are fine I think this might be a point of confusion. The downloads are external. But http://hadoop.apache.org/docs/ is supposed to be coming off this website. Updating that needs to be part of this patch/documentation for this process. Also, let me clarify the question a bit. If I build the website in hugo and then browse the public filesystem with a browser, things fail to load and links are broken. That's less than ideal. 
> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
> Issue Type: Improvement
> Components: site
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, HADOOP-14163.006.patch, HADOOP-14163.007.patch, HADOOP-14163.008.tar.gz, hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned website. Forrest is pretty bad compared to the newer static site generators out there (e.g. need to write XML instead of markdown, it's hard to review a staging site because of all the absolute links, hard to customize, did I mention XML?), and the look and feel of the site is from the 00s. We don't actually have that much site content, so it should be possible to migrate to a new system."
> This issue is to find a solution to migrate the old site to a new modern static site generator using a more contemporary theme.
> Goals:
> * existing links should work (or at least be redirected)
> * It should be easy to add more content required by a release automatically (most probably by creating separate markdown files)
[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244662#comment-16244662 ] Elek, Marton commented on HADOOP-14163:
---
About a) and b): until now we used mvn site for generating the docs and Apache Forrest to generate the site. This patch just replaces the Forrest part with Hugo. Hugo is a static website generator; mvn site is for generating project documentation. For mvn site, the reflow theme is almost the only one which could be used to build a modern responsive site, while with Hugo there are hundreds of more powerful themes, and Hugo's content handling is more natural.

The final goal (as I wrote on the hadoop mailing list on 03/24/2017, where I also proposed to use Hugo) was to simplify the release process. With Hugo it's very simple and easy to automate, thanks to the flexible content handling and dynamic menu generation. I can imagine that it could be done with mvn site, but not in such an easy way. My suggestion was to use the mvn site based documentation as before (but I agree that long term we can switch to a better reflow-based skin there as well).

c) I think only the docs/downloads are using external links, which are fine. All the other links could be adjusted via Hugo's baseurl parameter (-b):

{code}
grep -r "https://hadoop" .
./config.toml:baseurl = "https://hadoop.apache.org"
./layouts/partials/navbar.html: <a href="https://hadoop.apache.org/docs/r{{.BaseFileName}}/">{{.BaseFileName}}</a>
./layouts/release/single.html: <a href="https://hadoop.apache.org/docs/r{{.BaseFileName}}" class="btn btn-primary">Documentation</a>
{code}

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
> Issue Type: Improvement
> Components: site
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, HADOOP-14163.006.patch, HADOOP-14163.007.patch, HADOOP-14163.008.tar.gz, hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned website. Forrest is pretty bad compared to the newer static site generators out there (e.g. need to write XML instead of markdown, it's hard to review a staging site because of all the absolute links, hard to customize, did I mention XML?), and the look and feel of the site is from the 00s. We don't actually have that much site content, so it should be possible to migrate to a new system."
> This issue is to find a solution to migrate the old site to a new modern static site generator using a more contemporary theme.
> Goals:
> * existing links should work (or at least be redirected)
> * It should be easy to add more content required by a release automatically (most probably by creating separate markdown files)
[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244539#comment-16244539 ] Aaron Fabbri commented on HADOOP-15003:
---
Interesting. Sorry I didn't have more useful comments last night. I added more debug logging and since then have been unable to reproduce it. In bed last night I had an idea that maybe the huge-file output stream was not being closed reliably (thus no .pending being written in aboutToComplete() on close). This morning I looked at the test code {{test_010_CreateHugeFile()}} and it seems to be fine though; it explicitly closes the stream and also handles the error case with a try-with-resources block. I'll keep trying to reproduce it.

> Merge S3A committers into trunk: Yetus patch checker
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, HADOOP-13786-046.patch
>
> This is a Yetus only JIRA created to have Yetus review the HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR [https://github.com/apache/hadoop/pull/282] is stopping this happening in HADOOP-14971.
> Reviews should go into the PR/other task.
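The close-reliability question in the comment above — whether close() (and hence the .pending write it triggers) runs even when the write fails — comes down to try-with-resources semantics: the resource is closed before control reaches the catch block. A minimal self-contained sketch; the {{TrackingStream}} class is hypothetical, standing in for the S3A block output stream:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseDemo {
    // Records whether close() ran, standing in for a stream whose
    // close() performs the ".pending" metadata write.
    static class TrackingStream extends ByteArrayOutputStream {
        boolean closed = false;
        @Override
        public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    public static void main(String[] args) {
        TrackingStream out = new TrackingStream();
        try (OutputStream o = out) {
            o.write(1);
            // Simulate a mid-write failure.
            throw new IOException("simulated upload failure");
        } catch (IOException expected) {
            // try-with-resources already closed the stream by now
        }
        System.out.println(out.closed);  // true: close() ran despite the failure
    }
}
```

So if the test really does use try-with-resources, a skipped close() is an unlikely explanation; the stream is closed even on the exception path.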
[jira] [Commented] (HADOOP-15019) Hadoop shell script classpath de-duping ignores HADOOP_USER_CLASSPATH_FIRST
[ https://issues.apache.org/jira/browse/HADOOP-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244495#comment-16244495 ] Philip Zeyliger commented on HADOOP-15019:
---
bq. Or, just use 'hadoop classpath' ...
Yep, you're right. I had tried and failed, but I must have gotten something else wrong.

> Hadoop shell script classpath de-duping ignores HADOOP_USER_CLASSPATH_FIRST
> -
>
> Key: HADOOP-15019
> URL: https://issues.apache.org/jira/browse/HADOOP-15019
> Project: Hadoop Common
> Issue Type: Bug
> Components: bin
> Reporter: Philip Zeyliger
>
> If a user sets {{HADOOP_USER_CLASSPATH_FIRST=true}} and furthermore includes a directory that's already in Hadoop's classpath via {{HADOOP_CLASSPATH}}, that directory will appear later than it should in the eventual $CLASSPATH. I believe this is because the de-duping at https://github.com/apache/hadoop/blob/cbc632d9abf08c56a7fc02be51b2718af30bad28/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh#L1200 is ignoring the "before/after" parameter.
> To reproduce, first build the following trivial Java program:
> {code}
> $ cat Test.java
> public class Test {
>   public static void main(String[] args) {
>     System.out.println(System.getenv().get("CLASSPATH"));
>   }
> }
> $ javac Test.java
> $ jar cf test.jar Test.class
> {code}
> With that, if you happen to have an entry in HADOOP_CLASSPATH that matches what Hadoop would produce, you'll find the ordering not honored. It's easiest to reproduce this with a match for HADOOP_CONF_DIR, as in the second case below:
> {code}
> # As you'd expect, /usr/share is first!
> $ HADOOP_CONF_DIR=/etc HADOOP_USER_CLASSPATH_FIRST="true" HADOOP_CLASSPATH=/usr/share:/tmp:/bin bin/hadoop jar test.jar Test | tr ':' '\n' | grep -n . | grep '/usr/share'
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> 1:/usr/share
> # Surprise! /usr/share is now on the 3rd line, even though it was first in HADOOP_CLASSPATH.
> $ HADOOP_CONF_DIR=/usr/share HADOOP_USER_CLASSPATH_FIRST="true" HADOOP_CLASSPATH=/usr/share:/tmp:/bin bin/hadoop jar test.jar Test | tr ':' '\n' | grep -n . | grep '/usr/share'
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> 3:/usr/share
> {code}
> To re-iterate, what's surprising is that you can make an entry that's first in HADOOP_CLASSPATH show up not first in the resulting classpath.
> I ran into this configuring {{bin/hive}} with a confdir that was being used for both HDFS and Hive, and flailing as to why my {{log4j2.properties}} wasn't being read. The one in my conf dir was lower in my classpath than one bundled in some Hive jar.
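The bug described above is the de-duplication dropping the "first wins" ordering. What the user expects can be sketched as order-preserving de-dup, where a later duplicate never demotes an earlier entry. This is a hypothetical Java analog of that intent, not Hadoop's actual shell code — the real fix belongs in hadoop-functions.sh:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class ClasspathDedup {
    // Order-preserving de-duplication: a LinkedHashSet keeps the
    // first-seen position of every entry and silently drops repeats,
    // so an entry prepended via HADOOP_USER_CLASSPATH_FIRST stays first
    // even if Hadoop later appends the same directory.
    static String dedup(String classpath) {
        Set<String> seen = new LinkedHashSet<>();
        for (String entry : classpath.split(":")) {
            if (!entry.isEmpty()) {
                seen.add(entry);  // duplicate entries are ignored, order kept
            }
        }
        return String.join(":", seen);
    }

    public static void main(String[] args) {
        // /usr/share was prepended by the user and also appears later
        // (as HADOOP_CONF_DIR); it must remain first after de-duping.
        System.out.println(dedup("/usr/share:/tmp:/a.jar:/usr/share:/b.jar"));
        // -> /usr/share:/tmp:/a.jar:/b.jar
    }
}
```

The reported behavior suggests the shell function instead keeps the *last* occurrence (or re-adds the entry at its "after" position), which is what pushes the user's directory down the classpath.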
[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244397#comment-16244397 ] Allen Wittenauer commented on HADOOP-13514: --- No unreaped processes is a great thing to see. But this also implies that hadoop-hdfs-project's unit tests are broken beyond just that problem though. :( > Upgrade maven surefire plugin to 2.19.1 > --- > > Key: HADOOP-13514 > URL: https://issues.apache.org/jira/browse/HADOOP-13514 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.8.0 >Reporter: Ewan Higgs >Assignee: Akira Ajisaka > Attachments: HADOOP-13514-addendum.01.patch, > HADOOP-13514-testing.001.patch, HADOOP-13514-testing.002.patch, > HADOOP-13514-testing.003.patch, HADOOP-13514-testing.004.patch, > HADOOP-13514-testing.005.patch, HADOOP-13514.002.patch, > HADOOP-13514.003.patch, HADOOP-13514.004.patch, HADOOP-13514.005.patch, > HADOOP-13514.006.patch, surefire-2.19.patch > > > A lot of people working on Hadoop don't want to run all the tests when they > develop; only the bits they're working on. Surefire 2.19 introduced more > useful test filters which let us run a subset of the tests that brings the > build time down from 'come back tomorrow' to 'grab a coffee'. > For instance, if I only care about the S3 adaptor, I might run: > {code} > mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true > \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, > org.apache.hadoop.fs.s3a.*\" > {code} > We can work around this by specifying the surefire version on the command > line but it would be better, imo, to just update the default surefire used. 
> {code} > mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true > \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, > org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244344#comment-16244344 ] Hadoop QA commented on HADOOP-13514: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 57s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 48m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}162m 19s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}274m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport | | | hadoop.hdfs.server.namenode.TestFSEditLogLoader | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication | | | hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC | | | hadoop.hdfs.TestDistributedFileSystem | | | hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot | | | hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff | | | hadoop.hdfs.server.namenode.TestStripedINodeFile | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.server.namenode.TestFileContextAcl | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing | | | hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives | | | hadoop.hdfs.server.namenode.TestFileContextXAttr | | | hadoop.hdfs.server.namenode.TestSnapshotPathINodes | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244232#comment-16244232 ] Ryan Blue commented on HADOOP-15003: I don't think the partitioned committer should continue the _SUCCESS marker convention. Nothing that writes partitioned data currently depends on _SUCCESS markers, so it's easy to avoid the problem entirely because the markers are unreliable: what happens when you're appending data to a partition? We implemented a property that allows users to opt in to have _SUCCESS created for the directory output committer only. It creates the _SUCCESS marker after all other operations have finished because that's when we can guarantee that the write was successful. It doesn't delete other markers because there are no well-defined semantics for _SUCCESS with overwrite. > Merge S3A committers into trunk: Yetus patch checker > > > Key: HADOOP-15003 > URL: https://issues.apache.org/jira/browse/HADOOP-15003 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, > HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, > HADOOP-13786-046.patch > > > This is a Yetus only JIRA created to have Yetus review the > HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR > [https://github.com/apache/hadoop/pull/282] is stopping this happening in > HADOOP-14971. > Reviews should go into the PR/other task -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244157#comment-16244157 ] Steve Loughran commented on HADOOP-15003:
---
Think I've found one little issue which could create problems, though not by itself the root cause. I seem closer to replicating it though, and yes, it seems related to cleanup of jobs.

One little feature is that the committers were being (over?) zealous in aborting all MPUs under their destination path, on the basis that failed tasks could have left outstanding MPUs which, if the data were not persisted, would not be enumerable by looking for .pendingset files. But {{S3AFileSystem.listMultipartUploads(prefix)}} actually turns out to list everything matching the prefix, even in parallel dirs, as it isn't adding a "/" suffix to say "directory only". Which meant that if you had a job commit to "test/myjob", it'd also delete pending uploads to "test/myjob2".

There's more to it than that; I think I'll need to review all the listing code to be sure, but now I have some tests failing. Provided the tests themselves are correct, I'll be able to find/fix it.

> Merge S3A committers into trunk: Yetus patch checker
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, HADOOP-13786-046.patch
>
> This is a Yetus only JIRA created to have Yetus review the HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR [https://github.com/apache/hadoop/pull/282] is stopping this happening in HADOOP-14971.
> Reviews should go into the PR/other task -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
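The prefix pitfall Steve describes above comes from S3 prefixes being plain string prefixes, not path-aware: "test/myjob" also matches sibling directories like "test/myjob2", while "test/myjob/" scopes the match to that directory. A self-contained sketch — {{listUploads}} is a hypothetical stand-in for the S3 list-multipart-uploads call, not the S3A API:

```java
import java.util.ArrayList;
import java.util.List;

public class MpuPrefixDemo {
    // Hypothetical stand-in for S3's list-multipart-uploads: S3 prefix
    // matching is a plain string startsWith, with no notion of path
    // components.
    static List<String> listUploads(List<String> pendingKeys, String prefix) {
        List<String> matched = new ArrayList<>();
        for (String key : pendingKeys) {
            if (key.startsWith(prefix)) {
                matched.add(key);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<String> pending = List.of(
            "test/myjob/part-0000",
            "test/myjob2/part-0000");

        // Without a trailing "/", the sibling job's upload is swept up too.
        System.out.println(listUploads(pending, "test/myjob").size());   // 2
        // Appending "/" scopes the abort to the job's own directory.
        System.out.println(listUploads(pending, "test/myjob/").size());  // 1
    }
}
```

This is why a committer that aborts "all MPUs under its destination" must add the "/" suffix before listing, or a job committing to "test/myjob" cancels uploads belonging to "test/myjob2".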
[jira] [Updated] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15003: Status: Open (was: Patch Available) > Merge S3A committers into trunk: Yetus patch checker > > > Key: HADOOP-15003 > URL: https://issues.apache.org/jira/browse/HADOOP-15003 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, > HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, > HADOOP-13786-046.patch > > > This is a Yetus only JIRA created to have Yetus review the > HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR > [https://github.com/apache/hadoop/pull/282] is stopping this happening in > HADOOP-14971. > Reviews should go into the PR/other task -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243916#comment-16243916 ] Steve Loughran commented on HADOOP-15003:
---
Side issue, one for [~rdblue] in particular.

If the job commit fails, should any existing {{_SUCCESS}} file be deleted? Because things may be inconsistent & we shouldn't have any cue in the dir that it contains valid data. What I'd do is: delete the marker before any files are committed or deleted, but after any conflict resolution checks which may fail the job have run. That way, if a job commit is aborted, the existing marker is unchanged.

* directory committer: delete it for the APPEND case; retain for FAIL (it's deleted for free in OVERWRITE)
* partitioned committer: both OVERWRITE and APPEND delete the marker

Moot for the magic committer, as it doesn't do any in-situ overwrites/appends of data.

If you think we should do this, I can add it as a follow-up JIRA; it would need test modification.

> Merge S3A committers into trunk: Yetus patch checker
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, HADOOP-13786-046.patch
>
> This is a Yetus only JIRA created to have Yetus review the HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR [https://github.com/apache/hadoop/pull/282] is stopping this happening in HADOOP-14971.
> Reviews should go into the PR/other task.
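The ordering proposed above — conflict checks first, then marker deletion, then the commit, then a fresh marker last — can be sketched over an in-memory "directory". All names here are hypothetical illustrations, not the committer's actual API; the point is that an abort during the checks leaves the pre-existing marker untouched:

```java
import java.util.HashSet;
import java.util.Set;

public class SuccessMarkerDemo {
    static final String MARKER = "_SUCCESS";

    // Sketch of the proposed job-commit ordering:
    //   1. run conflict-resolution checks (these may fail the job),
    //   2. only then delete any existing _SUCCESS marker,
    //   3. commit the files,
    //   4. write the new marker last, once the data is known good.
    // If step 1 throws, no mutation has happened yet, so an aborted
    // commit leaves the old marker (and the old data's "valid" cue) intact.
    static void commitJob(Set<String> dir, Set<String> files,
                          boolean conflictCheckFails) {
        if (conflictCheckFails) {                       // step 1
            throw new IllegalStateException("conflict resolution failed");
        }
        dir.remove(MARKER);                             // step 2
        dir.addAll(files);                              // step 3
        dir.add(MARKER);                                // step 4
    }

    public static void main(String[] args) {
        Set<String> dir = new HashSet<>(Set.of("old-part", MARKER));
        try {
            commitJob(dir, Set.of("new-part"), true);
        } catch (IllegalStateException aborted) {
            // job commit aborted before any mutation
        }
        System.out.println(dir.contains(MARKER));  // true: old marker survives
    }
}
```

Between steps 2 and 4 there is no marker at all, which matches the goal: while the directory may be inconsistent, nothing claims it contains valid data.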
[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command
[ https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243835#comment-16243835 ] Steve Loughran commented on HADOOP-13980:
---
What about `fsck --check` and `fsck --fix`? We have an `s3guard import` command, but it assumes the table is unpopulated. Here I'm thinking "we have the table, but it may have diverged after failures. Check, or check and fix."

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0-beta1
> Reporter: Aaron Fabbri
> Assignee: Aaron Fabbri
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which compares S3 with the MetadataStore, and returns a failure status if any invariants are violated.
[jira] [Commented] (HADOOP-14468) S3Guard: make short-circuit getFileStatus() configurable
[ https://issues.apache.org/jira/browse/HADOOP-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243832#comment-16243832 ] Steve Loughran commented on HADOOP-14468: - FWIW, not doing the un-short-circuited check will save $0.004–$0.013 per open() call in the case the file is missing; $0.004 if the file is actually there > S3Guard: make short-circuit getFileStatus() configurable > > > Key: HADOOP-14468 > URL: https://issues.apache.org/jira/browse/HADOOP-14468 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri >Priority: Minor > > Currently, when S3Guard is enabled, getFileStatus() will skip S3 if it gets a > result from the MetadataStore (e.g. dynamodb) first. > I would like to add a new parameter > {{fs.s3a.metadatastore.getfilestatus.authoritative}} which, when true, keeps > the current behavior. When false, S3AFileSystem will check both S3 and the > MetadataStore. > I'm not sure yet if we want to have this behavior the same for all callers of > getFileStatus(), or if we only want to check both S3 and MetadataStore for > some internal callers such as open(). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243827#comment-16243827 ] Steve Loughran commented on HADOOP-15003: - thanks for this, I'll look through the @after code too. # All the other scale tests work in parallel, so I don't think it's their teardown; more likely something happening in a parallel test # which can include a job commit happening on a different path # as well as some test setup doing autopurge of old stuff # the test is getting the filename inconsistent: not correctly working out the path of the pending file The fact that you can see it and I can't makes me think it's either some race condition or config. The fact you can consistently see it means it's unlikely to be a race, as that would fail intermittently. I'd consider parallel tests purging buckets. But: how is that going to clean up the file under __magic? Lose the uncommitted MPUs, yes. Here's my plan # make sure that the code logs @ info when a magic file is saved, some message like "file $dest... ready to commit, commit metadata stored at $pending." # a failing test case: add some asserts in the test about the existence of pending files under a path. This will help differentiate purge of commit data from purge of files. Adding it in the test case which creates the file will verify that any delete/purge happens between the two # add a test which explicitly deletes the MPU of a pending commit, see what happens # add a test which explicitly runs two job commits in parallel to separate paths: verifies isolation # review the troubleshooting docs once your list-MPU CLI is in # have all commands to purge pending MPUs actually list @ info the files. After all, in normal execution there shouldn't be any. 
That CLI you've proposed for listing MPUs, "hadoop s3guard uploads", will help diagnose stuff in the field -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits
[ https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243825#comment-16243825 ] Steve Loughran commented on HADOOP-13974: - Example bash script in new uploads section refers to " hadoop uploads"; should be "hadoop s3guard uploads" > S3a CLI to support list/purge of pending multipart commits > -- > > Key: HADOOP-13974 > URL: https://issues.apache.org/jira/browse/HADOOP-13974 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-13974.001.patch, HADOOP-13974.002.patch, > HADOOP-13974.003.patch > > > The S3A CLI will need to be able to list and delete pending multipart > commits. > We can do the cleanup already via fs.s3a properties. The CLI will let scripts > stat for outstanding data (have a different exit code) and permit batch jobs > to explicitly trigger cleanups. > This will become critical with the multipart committer, as there's a > significantly higher likelihood of commits remaining outstanding. > We may also want to be able to enumerate/cancel all pending commits in the FS > tree -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13514: --- Target Version/s: 3.0.0, 2.10.0 (was: 3.0.0) > Upgrade maven surefire plugin to 2.19.1 > --- > > Key: HADOOP-13514 > URL: https://issues.apache.org/jira/browse/HADOOP-13514 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.8.0 >Reporter: Ewan Higgs >Assignee: Akira Ajisaka > Attachments: HADOOP-13514-addendum.01.patch, > HADOOP-13514-testing.001.patch, HADOOP-13514-testing.002.patch, > HADOOP-13514-testing.003.patch, HADOOP-13514-testing.004.patch, > HADOOP-13514-testing.005.patch, HADOOP-13514.002.patch, > HADOOP-13514.003.patch, HADOOP-13514.004.patch, HADOOP-13514.005.patch, > HADOOP-13514.006.patch, surefire-2.19.patch > > > A lot of people working on Hadoop don't want to run all the tests when they > develop; only the bits they're working on. Surefire 2.19 introduced more > useful test filters which let us run a subset of the tests that brings the > build time down from 'come back tomorrow' to 'grab a coffee'. > For instance, if I only care about the S3 adaptor, I might run: > {code} > mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true > \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, > org.apache.hadoop.fs.s3a.*\" > {code} > We can work around this by specifying the surefire version on the command > line but it would be better, imo, to just update the default surefire used. > {code} > mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true > \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, > org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13514: --- Attachment: HADOOP-13514.006.patch Great! The RM tests passed without timeout. Attaching 006 patch (ready for commit): * Add back the change in BUILDING.txt * Remove the dummy change from the RM module
[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243808#comment-16243808 ] Hadoop QA commented on HADOOP-13514: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 63m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 53m 41s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s{color} | {color:green} hadoop-aws in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-client-integration-tests in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}151m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-13514 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896616/HADOOP-13514-testing.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 36e8484cb905 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 410d031 | | maven | version: Apache Maven 3.3.9 | |
[jira] [Commented] (HADOOP-15024) AliyunOSS: Provide oss client side Hadoop version information to oss server
[ https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243788#comment-16243788 ] Steve Loughran commented on HADOOP-15024: - Makes sense: the other clients do this. S3A actually lets you define a UA, which is added to the user-agent string after the version. That way specific apps can be marked, which is useful if you are using access logs to identify app use, cost, etc. It may be good to include that in this patch. Usual policy: which test endpoint? > AliyunOSS: Provide oss client side Hadoop version information to oss server > --- > > Key: HADOOP-15024 > URL: https://issues.apache.org/jira/browse/HADOOP-15024 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-15024.000.patch > > > Provide oss client side Hadoop version to oss server, to help build access > statistic metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
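For context, the S3A user-agent customization mentioned in the comment above is driven by an ordinary Hadoop configuration property. A minimal sketch, assuming the standard S3A property name from HADOOP-13122 (the value "MyApp/1.0" is a hypothetical application tag):

```xml
<!-- Sketch: tag S3A's User-Agent header with an application identifier.
     The property name is S3A's; the value shown here is made up. -->
<property>
  <name>fs.s3a.user.agent.prefix</name>
  <value>MyApp/1.0</value>
</property>
```

An equivalent property for the Aliyun OSS connector would presumably let access logs distinguish applications in the same way.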
[jira] [Commented] (HADOOP-14964) AliyunOSS: backport HADOOP-12756 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243732#comment-16243732 ] Hadoop QA commented on HADOOP-14964: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 19 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 37s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 29s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 41s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 20s{color} | {color:green} branch-2 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} branch-2 passed {color} | || || || || 
{color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s{color} | {color:green} hadoop-aliyun in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}130m 54s{color} | {color:red} hadoop-tools in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s{color} | {color:green} hadoop-tools-dist in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}201m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Unreaped Processes | hadoop-tools:7 | | Failed junit tests | hadoop.yarn.sls.TestSLSRunner | | | hadoop.yarn.sls.nodemanager.TestNMSimulator | | | hadoop.tools.TestIntegration | | | hadoop.tools.TestDistCpViewFs | | | hadoop.resourceestimator.solver.impl.TestLpSolver | | | hadoop.resourceestimator.service.TestResourceEstimatorService | |
[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243640#comment-16243640 ] Akira Ajisaka commented on HADOOP-13514: Thanks [~aw] for the information. Let me see whether or not the timeout occurs in the RM tests.
[jira] [Updated] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1
[ https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13514: --- Attachment: HADOOP-13514-testing.005.patch 005-testing (not ready for commit): * Use surefire plugin 2.20.1 * Run RM tests via a dummy change
[jira] [Commented] (HADOOP-15024) AliyunOSS: Provide oss client side Hadoop version information to oss server
[ https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243618#comment-16243618 ] Hadoop QA commented on HADOOP-15024: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 58s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15024 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896600/HADOOP-15024.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 87673f722000 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bb8a6ee | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13644/testReport/ | | Max. process+thread count | 298 (vs. ulimit of 5000) | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13644/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > AliyunOSS: Provide oss client side Hadoop
[jira] [Updated] (HADOOP-15024) AliyunOSS: Provide oss client side Hadoop version information to oss server
[ https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15024: --- Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14872: Status: Open (was: Patch Available) > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, security >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch, > HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch, > HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch, > HADOOP-14872.012.patch, HADOOP-14872.013.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If > the application is not careful, tons of UOEs will show up in logs. > In comparison, opening an non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. > It is good for CryptoInputStream to implement CanUnbuffer for 2 reasons: > * Release buffer, cache, or any other resource when instructed > * Able to call its wrapped DFSInputStream unbuffer > * Avoid the UOE described above. Applications may not handle the UOE very > well. 
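The wrapper-delegation fix argued for above can be sketched in isolation. This is a hedged, self-contained illustration, not the actual HADOOP-14872 patch: the `CanUnbuffer` interface and the two stream classes below are simplified stand-ins for the real `org.apache.hadoop.fs` types.

```java
// Simplified stand-ins for org.apache.hadoop.fs.CanUnbuffer and the
// wrapped-stream chain; shows how a wrapper can forward unbuffer().
interface CanUnbuffer {
    void unbuffer();
}

// Stand-in for DFSInputStream: releases its buffered state when asked.
class InnerStream implements CanUnbuffer {
    boolean buffered = true;

    @Override
    public void unbuffer() {
        buffered = false;  // drop cached data / socket resources
    }
}

// Stand-in for CryptoInputStream: implements CanUnbuffer by delegating
// to the wrapped stream, so callers never hit the ClassCastException
// path that turns into an UnsupportedOperationException.
class WrapperStream implements CanUnbuffer {
    private final Object in;

    WrapperStream(Object in) {
        this.in = in;
    }

    @Override
    public void unbuffer() {
        if (in instanceof CanUnbuffer) {
            ((CanUnbuffer) in).unbuffer();
        }
        // else: nothing to release; quietly do nothing instead of throwing
    }
}

public class UnbufferDemo {
    public static void main(String[] args) {
        InnerStream inner = new InnerStream();
        WrapperStream outer = new WrapperStream(inner);
        outer.unbuffer();  // forwarded down the chain to the inner stream
        System.out.println(inner.buffered ? "still buffered" : "unbuffered");
    }
}
```

The point of the delegation is that the outermost call reaches the innermost stream without any caller needing to know how many wrappers sit in between.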
[jira] [Updated] (HADOOP-15024) AliyunOSS: Provide oss client side Hadoop version information to oss server
[ https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15024: --- Attachment: HADOOP-15024.000.patch > AliyunOSS: Provide oss client side Hadoop version information to oss server > --- > > Key: HADOOP-15024 > URL: https://issues.apache.org/jira/browse/HADOOP-15024 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-15024.000.patch > > > Provide the OSS client-side Hadoop version to the OSS server, to help build access > statistics metrics.
[jira] [Updated] (HADOOP-15024) AliyunOSS: Provide oss client side Hadoop version information to oss server
[ https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15024: --- Attachment: (was: HADOOP-15024.000.patch) > AliyunOSS: Provide oss client side Hadoop version information to oss server > --- > > Key: HADOOP-15024 > URL: https://issues.apache.org/jira/browse/HADOOP-15024 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Reporter: SammiChen >Assignee: SammiChen > > Provide the OSS client-side Hadoop version to the OSS server, to help build access > statistics metrics.
[jira] [Updated] (HADOOP-15024) AliyunOSS: Provide oss client side Hadoop version information to oss server
[ https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15024: --- Attachment: HADOOP-15024.000.patch Initial patch > AliyunOSS: Provide oss client side Hadoop version information to oss server > --- > > Key: HADOOP-15024 > URL: https://issues.apache.org/jira/browse/HADOOP-15024 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Reporter: SammiChen >Assignee: SammiChen > Attachments: HADOOP-15024.000.patch > > > Provide the OSS client-side Hadoop version to the OSS server, to help build access > statistics metrics.
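A common way to surface client-side version information to an object-store server is a User-Agent suffix. The sketch below is hypothetical and self-contained, not the HADOOP-15024 patch: `HADOOP_VERSION` stands in for what real code would presumably read from `org.apache.hadoop.util.VersionInfo.getVersion()`, and the OSS SDK configuration call that would consume the resulting string is not shown.

```java
public class UserAgentDemo {
    // Stand-in value; real code would obtain this from VersionInfo.getVersion().
    static final String HADOOP_VERSION = "3.0.0";

    // Compose a User-Agent the client could attach to every OSS request so the
    // server can aggregate access statistics per Hadoop version.
    static String buildUserAgent(String sdkUserAgent) {
        return sdkUserAgent + ", Hadoop " + HADOOP_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(buildUserAgent("aliyun-sdk-java"));
    }
}
```

The design choice here is purely additive: the SDK's own User-Agent stays intact and the Hadoop version is appended, so server-side log parsing that predates the change keeps working.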
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14872: Status: Patch Available (was: Open) > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, security >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch, > HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch, > HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch, > HADOOP-14872.012.patch, HADOOP-14872.013.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, a UOE will be thrown. If > the application is not careful, tons of UOEs will show up in the logs. > In comparison, opening a non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. > It is good for CryptoInputStream to implement CanUnbuffer for three reasons: > * It releases buffers, caches, or any other resources when instructed > * It can call unbuffer on its wrapped DFSInputStream > * It avoids the UOE described above. Applications may not handle the UOE very > well. 
[jira] [Comment Edited] (HADOOP-14964) AliyunOSS: backport HADOOP-12756 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243484#comment-16243484 ] SammiChen edited comment on HADOOP-14964 at 11/8/17 7:39 AM: - Hi [~uncleGen], I uploaded a patch to port the latest Aliyun OSS code from trunk to branch-2. Would you please help review it? I also tried to port OSS to branch-2.7, but there is a transitive dependency issue: the OSS SDK requires httpclient-4.5.2, while other modules in branch-2.7 only support httpclient-4.2.5. With the old httpclient-4.2.5, the OSS client can't connect to the OSS server. Do you have any suggestions? was (Author: sammi): Porting latest Aliyun OSS from trunk to branch-2 > AliyunOSS: backport HADOOP-12756 to branch-2 > > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
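For context on the httpclient conflict discussed above: in Maven, a transitive dependency version can be pinned across modules with `dependencyManagement`. The fragment below is only a sketch of that mechanism; it does not resolve the underlying incompatibility, since pinning 4.5.2 would in turn break the branch-2.7 modules that only support 4.2.5.

```xml
<!-- Sketch: force a single httpclient version for all modules.
     Illustrates the Maven mechanism only; it does not solve the
     branch-2.7 incompatibility described above. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.5.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```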
[jira] [Updated] (HADOOP-14964) AliyunOSS: backport HADOOP-12756 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-14964: --- Attachment: HADOOP-14964-branch-2.000.patch Porting latest Aliyun OSS from trunk to branch-2 > AliyunOSS: backport HADOOP-12756 to branch-2 > > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >