[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage
[ https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375362#comment-16375362 ]

ASF GitHub Bot commented on YARN-2162:
--------------------------------------

Github user flyrain commented on the issue:

    https://github.com/apache/hadoop/pull/261

    Committed

> add ability in Fair Scheduler to optionally configure maxResources in terms
> of percentage
> ---------------------------------------------------------------------------
>
>                 Key: YARN-2162
>                 URL: https://issues.apache.org/jira/browse/YARN-2162
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: fairscheduler, scheduler
>            Reporter: Ashwin Shankar
>            Assignee: Yufei Gu
>            Priority: Major
>              Labels: scheduler
>             Fix For: 2.9.0, 3.0.0, 3.1.0
>
>         Attachments: YARN-2162.001.patch, YARN-2162.002.patch,
> YARN-2162.003.patch, YARN-2162.004.patch, YARN-2162.005.patch,
> YARN-2162.006.patch, YARN-2162.007.patch, YARN-2162.008.patch,
> YARN-2162.branch-2.010.patch, YARN-2162.branch-3.0.009.patch,
> test-400nm-200app-2k_NODE_UPDATE.timecost.svg
>
>
> minResources and maxResources in fair scheduler configs are expressed in
> terms of absolute numbers X mb, Y vcores.
> As a result, when we expand or shrink our hadoop cluster, we need to
> recalculate and change minResources/maxResources accordingly, which is pretty
> inconvenient.
> We can circumvent this problem if we can optionally configure these
> properties in terms of percentage of cluster capacity.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
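For readers of the archive: the feature above lets maxResources scale with the cluster automatically. A minimal sketch of an allocation file using it — the queue name and values are illustrative, and the exact percentage syntax should be checked against the Fair Scheduler documentation for the release lines (2.9.0/3.0.0/3.1.0) in Fix For:

```xml
<!-- Illustrative fair-scheduler.xml fragment: maxResources as a
     percentage of cluster capacity instead of absolute mb/vcores. -->
<allocations>
  <queue name="analytics">
    <!-- Absolute form: must be recalculated whenever the cluster resizes. -->
    <minResources>10240 mb, 10 vcores</minResources>
    <!-- Percentage form enabled by YARN-2162: scales with the cluster. -->
    <maxResources>40.0%</maxResources>
  </queue>
</allocations>
```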
[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage
[ https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375363#comment-16375363 ]

ASF GitHub Bot commented on YARN-2162:
--------------------------------------

GitHub user flyrain reopened a pull request:

    https://github.com/apache/hadoop/pull/261

    YARN-2162. add ability to optionally configure maxResources in terms …

…of percentage

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/flyrain/hadoop yarn-2162

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/hadoop/pull/261.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #261

commit bd8f446eaa17a35d1b50e6206f82374ca1600125
Author: Yufei Gu
Date:   2017-09-22T02:20:37Z

    YARN-2162

commit 977b6eab736d08639c0b3e624b041521181346dd
Author: Yufei Gu
Date:   2017-09-25T17:35:23Z

    YARN-2162 fixed style issues

commit c68fe76adc0f48791a369a49119c88adf3d10c8d
Author: Yufei Gu
Date:   2017-09-28T18:28:52Z

    YARN-2162

commit 65ce5f14dd2d8f8560e2b7389ee830c225a2df25
Author: Yufei Gu
Date:   2017-10-05T17:06:40Z

    YARN-2162
[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage
[ https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375361#comment-16375361 ]

ASF GitHub Bot commented on YARN-2162:
--------------------------------------

Github user flyrain closed the pull request at:

    https://github.com/apache/hadoop/pull/261
[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage
[ https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375364#comment-16375364 ]

ASF GitHub Bot commented on YARN-2162:
--------------------------------------

Github user flyrain closed the pull request at:

    https://github.com/apache/hadoop/pull/261
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375328#comment-16375328 ]

Vrushali C commented on YARN-7346:
----------------------------------

Thanks [~busbey] for the review and discussion.

{quote}
bq. Even if this module for HBase 2.0.0 support isn't the default used in trunk, shouldn't we still be testing changes to the module?
{quote}

We have been exploring ways to do two compilations for trunk in jenkins. So far, the solution we have is to have branch yarn-7055 run compilations for hbase-2.x and keep trunk builds on the default hbase profile, which would be based on the latest stable hbase version.

Otherwise, the approach in patch 8 looks good to me, pending jenkins.

> Fix compilation errors against hbase2 beta release
> --------------------------------------------------
>
>                 Key: YARN-7346
>                 URL: https://issues.apache.org/jira/browse/YARN-7346
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Ted Yu
>            Assignee: Haibo Chen
>            Priority: Major
>         Attachments: YARN-7346.00.patch, YARN-7346.01.patch,
> YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch,
> YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch,
> YARN-7346.06.patch, YARN-7346.07.patch, YARN-7346.08.patch,
> YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, YARN-7581.prelim.patch
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3,
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.
[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression
[ https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375319#comment-16375319 ]

Weiwei Yang commented on YARN-7921:
-----------------------------------

Attached the placement constraint expression syntax specification. [~kkaranasos], please take a look. Also, please help review this patch, thanks.

> Transform a PlacementConstraint to a string expression
> ------------------------------------------------------
>
>                 Key: YARN-7921
>                 URL: https://issues.apache.org/jira/browse/YARN-7921
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>            Priority: Major
>         Attachments: Placement Constraint Expression Syntax
> Specification.pdf, YARN-7921.001.patch, YARN-7921.002.patch
>
> Purpose:
> Make placement constraints viewable on the UI or in logs, e.g. print an app's
> placement constraint on the RM app page. This helps users work with constraints
> and analyze placement issues more easily.
> Propose:
> Like what was added for DS, toString is the reverse of
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint to a
> string using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}
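The round-trip idea in the proposal — toString as the inverse of {{PlacementConstraintParser}}, emitting the same "IN,NODE,hbase-m" comma syntax — can be sketched with a tiny self-contained class. The class and method names below are hypothetical stand-ins, not the actual YARN API:

```java
// Hypothetical sketch of the serialize/parse round-trip for a single
// target constraint (operator, scope, allocation tag).
public class ConstraintToString {

    // Forward direction: render the constraint in the comma syntax
    // quoted in the jira ("IN,NODE,hbase-m").
    static String toExpression(String op, String scope, String tag) {
        return op + "," + scope + "," + tag;
    }

    // Reverse direction: split the expression back into its parts,
    // mirroring what a parser would do.
    static String[] parse(String expr) {
        return expr.split(",");
    }

    public static void main(String[] args) {
        String expr = toExpression("IN", "NODE", "hbase-m");
        System.out.println(expr); // IN,NODE,hbase-m
        // Round-trip property: parsing the rendered string recovers
        // exactly the pieces we started from.
        String[] parts = parse(expr);
        assert expr.equals(toExpression(parts[0], parts[1], parts[2]));
    }
}
```

The useful property is exactly the one the jira asks for: the printed form is unambiguous enough that the parser can reconstruct the constraint from it.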
[jira] [Updated] (YARN-7921) Transform a PlacementConstraint to a string expression
[ https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated YARN-7921:
------------------------------
    Attachment: Placement Constraint Expression Syntax Specification.pdf
[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application
[ https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375289#comment-16375289 ]

Sunil G commented on YARN-7957:
-------------------------------

I think it would be better if we had a service state similar to YarnApplicationState. App states are different from service states, and hence such an enum would also help us for metrics etc., not just the UI.

> Yarn service delete option disappears after stopping application
> ----------------------------------------------------------------
>
>                 Key: YARN-7957
>                 URL: https://issues.apache.org/jira/browse/YARN-7957
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn-ui-v2
>    Affects Versions: 3.1.0
>            Reporter: Yesha Vora
>            Assignee: Sunil G
>            Priority: Critical
>         Attachments: YARN-7957.01.patch
>
> Steps:
> 1) Launch a yarn service
> 2) Go to the service page and click on the Setting button -> "Stop Service".
> The application will be stopped.
> 3) Refresh the page
> Here, the setting button disappears. Thus, the user cannot delete the service
> from the UI after stopping the application.
> Expected behavior:
> The setting button should still be present after the application is stopped.
> If the application is stopped, the setting button should only offer the
> "Delete Service" action.
[jira] [Commented] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations
[ https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375282#comment-16375282 ]

genericqa commented on YARN-6528:
---------------------------------

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 413 unchanged - 4 fixed = 413 total (was 417) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 10s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6528 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911854/YARN-6528.v011.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9df8028ffaad 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 329a4fd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19800/testReport/ |
| Max. process+thread count | 839 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19800/console |
| Powered by | Apache
[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application
[ https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375277#comment-16375277 ]

Gour Saha commented on YARN-7957:
---------------------------------

[~sunilg], I see. Now, YarnApplicationState does not have a state which represents DELETED or DESTROYED. So, when a service is destroyed, if YARN Service writes a string to ATS which does not have any enum reference, is that acceptable? Or do we have to introduce this state in some enum?
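The service-level enum being discussed could look roughly like this. A hypothetical sketch only: the state names (including DESTROYED) and the delete rule are assumptions drawn from the comments above, not the enum YARN eventually shipped:

```java
// Hypothetical service-state enum for YARN Services, distinct from
// YarnApplicationState, as suggested in the discussion above.
public class ServiceStateDemo {

    enum ServiceState { ACCEPTED, STARTED, STABLE, STOPPED, DESTROYED }

    // UI rule from the jira: once a service is stopped, only the
    // "Delete Service" action should remain available.
    static boolean canDelete(ServiceState state) {
        return state == ServiceState.STOPPED;
    }

    public static void main(String[] args) {
        System.out.println(canDelete(ServiceState.STOPPED)); // true
        System.out.println(canDelete(ServiceState.STARTED)); // false
    }
}
```

A dedicated enum like this would also give ATS and metrics a well-defined value to record for destroyed services, instead of a free-form string.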
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375275#comment-16375275 ]

Rohith Sharma K S commented on YARN-7346:
-----------------------------------------

+1 lgtm for the 08 patch, pending Jenkins. [~busbey], do you have any concerns/doubts about the approach we are taking? We greatly appreciate your suggestions/inputs. Otherwise I will proceed with committing this patch this weekend.
[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other
[ https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375269#comment-16375269 ]

genericqa commented on YARN-7446:
---------------------------------

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 29m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 24s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 34s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7446 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911855/YARN-7446.004.patch |
| Optional Tests | asflicense compile cc mvnsite javac unit |
| uname | Linux 33aa72a2eaa2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 329a4fd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19801/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19801/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haibo Chen updated YARN-7346:
-----------------------------
    Attachment: YARN-7346.08.patch
[jira] [Updated] (YARN-7446) Docker container privileged mode and --user flag contradict each other
[ https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Yang updated YARN-7446:
----------------------------
    Attachment: YARN-7446.004.patch

> Docker container privileged mode and --user flag contradict each other
> ----------------------------------------------------------------------
>
>                 Key: YARN-7446
>                 URL: https://issues.apache.org/jira/browse/YARN-7446
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>         Attachments: YARN-7446.001.patch, YARN-7446.002.patch,
> YARN-7446.003.patch, YARN-7446.004.patch
>
> In the current implementation, when privileged=true, the --user flag is also
> passed to docker for launching the container. In reality, the container has no
> way to use root privileges unless there is a sticky bit or sudoers in the image
> for the specified user to gain privileges again. To avoid dropping and
> reacquiring root privileges, we can avoid specifying both flags. When
> privileged mode is enabled, the --user flag should be omitted. When
> non-privileged mode is enabled, the --user flag is supplied.
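The rule the description proposes — emit --user only for non-privileged containers — can be sketched as a small argument builder. The method name and argument handling below are illustrative, not the container-executor implementation (which is C):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the docker-run argument rule from YARN-7446:
// --privileged and --user are mutually exclusive in the built command.
public class DockerRunArgs {

    static List<String> buildRunArgs(boolean privileged, String user) {
        List<String> args = new ArrayList<>();
        args.add("run");
        if (privileged) {
            // Privileged container: runs as root inside; a --user flag
            // here would contradict the requested privileges.
            args.add("--privileged");
        } else {
            // Non-privileged container: drop to the requested user.
            args.add("--user");
            args.add(user);
        }
        return args;
    }

    public static void main(String[] args) {
        System.out.println(buildRunArgs(true, "nobody"));
        System.out.println(buildRunArgs(false, "nobody"));
    }
}
```

Making the two flags mutually exclusive at command-construction time is what removes the contradiction the jira title describes.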
[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other
[ https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375228#comment-16375228 ]

Eric Yang commented on YARN-7446:
---------------------------------

[~ebadger] Thank you for the review, but I can't move the free to the end of the function for both free statements in this patch because there are other return conditions that could happen before the end of the function. I updated the patch to use strcasecmp.
[jira] [Comment Edited] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374021#comment-16374021 ] Jiandan Yang edited comment on YARN-7929 at 2/24/18 1:50 AM: -- Hi [~youchen], thanks for your attention. I did encounter the issue of merging failed when I pull latest code in my local develop environment. I will upload a new patch based on latest code. "water level" to the NMSimulator simulates actual resource utilization, the scheduling of OPPORTUNISTIC containers through the central RM need actual node utilization according to design doc in YARN-1011. was (Author: yangjiandan): Hi [~yochen], thanks for your attention. I did encounter the issue of merging failed when I pull latest code in my local develop environment. I will upload a new patch based on latest code. "water level" to the NMSimulator simulates actual resource utilization, the scheduling of OPPORTUNISTIC containers through the central RM need actual node utilization according to design doc in YARN-1011. > SLS supports setting container execution > > > Key: YARN-7929 > URL: https://issues.apache.org/jira/browse/YARN-7929 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Major > Attachments: YARN-7929.001.patch, YARN-7929.002.patch > > > SLS currently support three tracetype, SYNTH, SLS and RUMEN, but trace file > can not set execution type of container. > This jira will introduce execution type in SLS to help better simulation. > This will help the perf testing with regarding to the Opportunistic > Containers. 
> RUMEN has the default execution type GUARANTEED. > SYNTH sets the execution type via the fields map_execution_type and > reduce_execution_type. > SLS sets the execution type via the field container.execution_type. > For compatibility, GUARANTEED is used as the default when the above > fields are not set in the trace file. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6528) [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan Operations
[ https://issues.apache.org/jira/browse/YARN-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaohua (Victor) Liang updated YARN-6528: - Attachment: YARN-6528.v011.patch > [PERF/TEST] Add JMX metrics for Plan Follower and Agent Placement and Plan > Operations > - > > Key: YARN-6528 > URL: https://issues.apache.org/jira/browse/YARN-6528 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sean Po >Assignee: Xiaohua (Victor) Liang >Priority: Major > Attachments: YARN-6528.v001.patch, YARN-6528.v002.patch, > YARN-6528.v003.patch, YARN-6528.v004.patch, YARN-6528.v005.patch, > YARN-6528.v006.patch, YARN-6528.v007.patch, YARN-6528.v008.patch, > YARN-6528.v009.patch, YARN-6528.v010.patch, YARN-6528.v011.patch > > > YARN-1051 introduced a ReservationSytem that enables the YARN RM to handle > time explicitly, i.e. users can now "reserve" capacity ahead of time which is > predictably allocated to them. In order to understand in finer detail the > performance of Rayon, YARN-6528 proposes to include JMX metrics in the Plan > Follower, Agent Placement and Plan Operations components of Rayon. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer
[ https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375133#comment-16375133 ] genericqa commented on YARN-7962: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 53s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}120m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector | | | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7962 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911807/YARN-7962.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b809dea20cc0 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51088d3 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/19799/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19799/testReport/ | | Max.
[jira] [Created] (YARN-7968) Reset the queue name in submission context while recovering an application
Yufei Gu created YARN-7968: -- Summary: Reset the queue name in submission context while recovering an application Key: YARN-7968 URL: https://issues.apache.org/jira/browse/YARN-7968 Project: Hadoop YARN Issue Type: Improvement Components: fairscheduler Affects Versions: 3.1.0 Reporter: Yufei Gu Assignee: Yufei Gu After YARN-7139, a new application gets the correct queue name in its submission context. We need to do the same thing for application recovery. {code} if (isAppRecovering) { if (LOG.isDebugEnabled()) { LOG.debug(applicationId + " is recovering. Skip notifying APP_ACCEPTED"); } } else { // During tests we do not always have an application object, handle // it here but we probably should fix the tests if (rmApp != null && rmApp.getApplicationSubmissionContext() != null) { // Before we send out the event that the app is accepted is // to set the queue in the submissionContext (needed on restore etc) rmApp.getApplicationSubmissionContext().setQueue(queue.getName()); } rmContext.getDispatcher().getEventHandler().handle( new RMAppEvent(applicationId, RMAppEventType.APP_ACCEPTED)); } {code} We can do it by moving the {{rmApp.getApplicationSubmissionContext().setQueue}} block out of the if-else block. cc [~wilfreds]. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
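The refactoring proposed above can be modeled with a small, self-contained sketch. The class and field names below are illustrative stand-ins, not the real RM classes: with the original if-else structure a recovering application never gets its submission-context queue updated, while moving the setQueue call out of the conditional fixes both paths.

```java
public class QueueRecoveryDemo {
  // Stand-in for ApplicationSubmissionContext#setQueue/getQueue.
  static class SubmissionContext {
    String queue = "default";
  }

  // Original structure: the queue is only set on the non-recovery path.
  static void acceptOriginal(SubmissionContext ctx, boolean recovering, String placedQueue) {
    if (recovering) {
      // "is recovering. Skip notifying APP_ACCEPTED" -- queue is NOT updated here
    } else {
      ctx.queue = placedQueue;
      // ... send APP_ACCEPTED event ...
    }
  }

  // Proposed structure: the queue is set on both paths;
  // only the APP_ACCEPTED notification stays conditional.
  static void acceptProposed(SubmissionContext ctx, boolean recovering, String placedQueue) {
    ctx.queue = placedQueue; // needed on restore as well as on first submission
    if (!recovering) {
      // ... send APP_ACCEPTED event ...
    }
  }

  public static void main(String[] args) {
    SubmissionContext a = new SubmissionContext();
    acceptOriginal(a, true, "root.users.alice");
    System.out.println("original, recovering: " + a.queue);   // stays "default"

    SubmissionContext b = new SubmissionContext();
    acceptProposed(b, true, "root.users.alice");
    System.out.println("proposed, recovering: " + b.queue);   // "root.users.alice"
  }
}
```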
[jira] [Updated] (YARN-7968) Reset the queue name in submission context while recovering an application
[ https://issues.apache.org/jira/browse/YARN-7968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-7968: --- Description: After YARN-7139, the new application can get correct queue name in its submission context. We need to do the same thing for application recovering. {code} if (isAppRecovering) { if (LOG.isDebugEnabled()) { LOG.debug(applicationId + " is recovering. Skip notifying APP_ACCEPTED"); } } else { // During tests we do not always have an application object, handle // it here but we probably should fix the tests if (rmApp != null && rmApp.getApplicationSubmissionContext() != null) { // Before we send out the event that the app is accepted is // to set the queue in the submissionContext (needed on restore etc) rmApp.getApplicationSubmissionContext().setQueue(queue.getName()); } rmContext.getDispatcher().getEventHandler().handle( new RMAppEvent(applicationId, RMAppEventType.APP_ACCEPTED)); } {code} We can do it by moving the {{rmApp.getApplicationSubmissionContext().setQueue}} block out of the if-else block. cc [~wilfreds]. was: After YARN-7139, the new application can get correct queue name in its submission context. We need to do the same thing for application recovering. {code} if (isAppRecovering) { if (LOG.isDebugEnabled()) { LOG.debug(applicationId + " is recovering. 
Skip notifying APP_ACCEPTED"); } } else { // During tests we do not always have an application object, handle // it here but we probably should fix the tests if (rmApp != null && rmApp.getApplicationSubmissionContext() != null) { // Before we send out the event that the app is accepted is // to set the queue in the submissionContext (needed on restore etc) rmApp.getApplicationSubmissionContext().setQueue(queue.getName()); } rmContext.getDispatcher().getEventHandler().handle( new RMAppEvent(applicationId, RMAppEventType.APP_ACCEPTED)); } {code} We can do it by move the {{rmApp.getApplicationSubmissionContext().setQueue}} block out of the if-else block. cc [~wilfreds]. > Reset the queue name in submission context while recovering an application > -- > > Key: YARN-7968 > URL: https://issues.apache.org/jira/browse/YARN-7968 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.1.0 >Reporter: Yufei Gu >Assignee: Yufei Gu >Priority: Major > > After YARN-7139, the new application can get correct queue name in its > submission context. We need to do the same thing for application recovering. > {code} > if (isAppRecovering) { > if (LOG.isDebugEnabled()) { > LOG.debug(applicationId > + " is recovering. Skip notifying APP_ACCEPTED"); > } > } else { > // During tests we do not always have an application object, handle > // it here but we probably should fix the tests > if (rmApp != null && rmApp.getApplicationSubmissionContext() != null) > { > // Before we send out the event that the app is accepted is > // to set the queue in the submissionContext (needed on restore etc) > rmApp.getApplicationSubmissionContext().setQueue(queue.getName()); > } > rmContext.getDispatcher().getEventHandler().handle( > new RMAppEvent(applicationId, RMAppEventType.APP_ACCEPTED)); > } > {code} > We can do it by moving the > {{rmApp.getApplicationSubmissionContext().setQueue}} block out of the if-else > block. cc [~wilfreds]. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other
[ https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375082#comment-16375082 ] Eric Badger commented on YARN-7446: --- Hey [~eyang], just a few minor things on this patch. nit: To be consistent with the rest of the code, we should move the frees that aren't in the if statements to the end of the functions, just before the return. {noformat} + if (privileged == NULL || strcmp(privileged, "false") == 0) { {noformat} This should use {{strcasecmp}} to be consistent with the other privilege checks. > Docker container privileged mode and --user flag contradict each other > -- > > Key: YARN-7446 > URL: https://issues.apache.org/jira/browse/YARN-7446 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-7446.001.patch, YARN-7446.002.patch, > YARN-7446.003.patch > > > In the current implementation, when privileged=true, --user flag is also > passed to docker for launching container. In reality, the container has no > way to use root privileges unless there is sticky bit or sudoers in the image > for the specified user to gain privileges again. To avoid duplication of > dropping and reacquire root privileges, we can reduce the duplication of > specifying both flag. When privileged mode is enabled, --user flag should be > omitted. When non-privileged mode is enabled, --user flag is supplied. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7967) Better doc and Java doc for Fair Scheduler Queue ACL
Yufei Gu created YARN-7967: -- Summary: Better doc and Java doc for Fair Scheduler Queue ACL Key: YARN-7967 URL: https://issues.apache.org/jira/browse/YARN-7967 Project: Hadoop YARN Issue Type: Improvement Components: fairscheduler Affects Versions: 3.1.0 Reporter: Yufei Gu Wilfred mentioned that: {quote} Queue ACLs work bottom up. The first check is made at the leaf queue (in the example above q2). If the ACL at that level does not allow access we go up one level (here q1) and check that level. That process is repeated until we hit the top in the form of the root queue. If at any level we are allowed to submit or administer then the action is allowed and we stop reversing up the tree. {quote} Bottom-up is surprising behavior; most queue-related checking works top-down. We should add this to the Fair Scheduler doc. In addition, maybe link the doc to the Javadoc of AllocationFileLoaderService#getDefaultPermissions so that it won't be too confusing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7967) Better doc and Java doc for Fair Scheduler Queue ACL
[ https://issues.apache.org/jira/browse/YARN-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-7967: --- Description: [~wilfreds] mentioned that: {quote} Queue ACLs work bottom up. The first check is made at the leaf queue. If the ACL at that level does not allow access we go up one level and check that level. That process is repeated until we hit the top in the form of the root queue. If at any level we are allowed to submit or administer then the action is allowed and we stop reversing up the tree. {quote} Bottom up is a surprise behavior. Most queue related checking works top down. We should add this into Fair Scheduler Doc. In addition, maybe link the doc to Java Doc of AllocationFileLoaderService#getDefaultPermissions so that it won't be too confusing. was: Wilfred mentioned that: {quote} Queue ACLs work bottom up. The first check is made at the leaf queue (in the example above q2). If the ACL at that level does not allow access we go up one level (here q1) and check that level. That process is repeated until we hit the top in the form of the root queue. If at any level we are allowed to submit or administer then the action is allowed and we stop reversing up the tree. {quote} Bottom up is a surprise behavior. Most queue related checking works top down. We should add this into Fair Scheduler Doc. In addition, maybe link the doc to Java Doc of AllocationFileLoaderService#getDefaultPermissions so that it won't be too confusing. > Better doc and Java doc for Fair Scheduler Queue ACL > > > Key: YARN-7967 > URL: https://issues.apache.org/jira/browse/YARN-7967 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.1.0 >Reporter: Yufei Gu >Priority: Major > Labels: newbie > > [~wilfreds] mentioned that: > {quote} > Queue ACLs work bottom up. The first check is made at the leaf queue. If the > ACL at that level does not allow access we go up one level and check that > level. 
That process is repeated until we hit the top in the form of the root > queue. If at any level we are allowed to submit or administer then the action > is allowed and we stop reversing up the tree. > {quote} > Bottom up is a surprise behavior. Most queue related checking works top down. > We should add this into Fair Scheduler Doc. In addition, maybe link the doc > to Java Doc of AllocationFileLoaderService#getDefaultPermissions so that it > won't be too confusing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
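The bottom-up walk described in the quote can be sketched as follows. The queue layout and ACL representation here are hypothetical illustrations, not the actual FairScheduler API: access is granted as soon as any queue on the path from the leaf up to the root allows the user.

```java
import java.util.*;

public class BottomUpAclDemo {
  // Per-queue set of users allowed to submit; a stand-in for the real ACLs.
  static Map<String, Set<String>> submitAcl = new HashMap<>();
  // Child queue name -> parent queue name; root has no parent.
  static Map<String, String> parent = new HashMap<>();

  // Bottom-up check as described: start at the leaf and walk toward root;
  // the first level that allows the user wins.
  static boolean canSubmit(String queue, String user) {
    for (String q = queue; q != null; q = parent.get(q)) {
      if (submitAcl.getOrDefault(q, Set.of()).contains(user)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    parent.put("root.q1", "root");
    parent.put("root.q1.q2", "root.q1");
    submitAcl.put("root.q1", Set.of("alice")); // only q1 grants alice

    // Even though the leaf q2 grants nothing, alice is allowed because
    // the walk continues upward and finds the grant at q1.
    System.out.println(canSubmit("root.q1.q2", "alice")); // true
    System.out.println(canSubmit("root.q1.q2", "bob"));   // false
  }
}
```

Note how this differs from a top-down scheme, where a deny at an ancestor would stop the check before the leaf is ever consulted; this asymmetry is exactly what the documentation should call out.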
[jira] [Created] (YARN-7966) Remove AllocationConfiguration#getQueueAcl and related unit test
Yufei Gu created YARN-7966: -- Summary: Remove AllocationConfiguration#getQueueAcl and related unit test Key: YARN-7966 URL: https://issues.apache.org/jira/browse/YARN-7966 Project: Hadoop YARN Issue Type: Improvement Components: fairscheduler Affects Versions: 3.1.0 Reporter: Yufei Gu AllocationConfiguration#getQueueAcl isn't needed anymore after YARN-4997. We should remove it and its related unit test. All of its logic is reimplemented in AllocationFileLoaderService#getDefaultPermissions. Since class AllocationConfiguration doesn't have any API annotation, it is considered private, so it is OK to remove its public method. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM
[ https://issues.apache.org/jira/browse/YARN-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375030#comment-16375030 ] Zian Chen edited comment on YARN-7637 at 2/23/18 10:18 PM: --- Thank you, [~sunilg] . Hi [~leftnoteasy] , could you share your thoughts on the patch as well? Jenkins passed except a minor code style issue which should be easy to fix. Thanks was (Author: zian chen): Thank you, [~sunilg] . Hi Wangda, could you share your thoughts on the patch as well? Jenkins passed except a minor code style issue which should be easy to fix. Thanks > GPU volume creation command fails when work preserving is disabled at NM > > > Key: YARN-7637 > URL: https://issues.apache.org/jira/browse/YARN-7637 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Sunil G >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7637.001.patch > > > When work preserving is disabled, NM uses {{NMNullStateStoreService}}. Hence > resource mappings related to GPU wont be saved at Container. > This has to be rechecked and store accordingly. > cc/ [~leftnoteasy] and [~Zian Chen] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM
[ https://issues.apache.org/jira/browse/YARN-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375030#comment-16375030 ] Zian Chen commented on YARN-7637: - Thank you, [~sunilg] . Hi Wangda, could you share your thoughts on the patch as well? Jenkins passed except a minor code style issue which should be easy to fix. Thanks > GPU volume creation command fails when work preserving is disabled at NM > > > Key: YARN-7637 > URL: https://issues.apache.org/jira/browse/YARN-7637 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Sunil G >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7637.001.patch > > > When work preserving is disabled, NM uses {{NMNullStateStoreService}}. Hence > resource mappings related to GPU wont be saved at Container. > This has to be rechecked and store accordingly. > cc/ [~leftnoteasy] and [~Zian Chen] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map
[ https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375022#comment-16375022 ] Hudson commented on YARN-5714: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13708 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13708/]) YARN-5714. ContainerExecutor does not order environment map. Contributed (jlowe: rev 8e728f39c961f034369b43e087d68d01aa4a0e7d) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java > ContainerExecutor does not order environment map > > > Key: YARN-5714 > URL: https://issues.apache.org/jira/browse/YARN-5714 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1 > Environment: all (linux and windows alike) >Reporter: Remi Catherinot >Assignee: Remi Catherinot >Priority: Trivial > Labels: oct16-medium > Fix For: 3.1.0 > > Attachments: YARN-5714.001.patch, YARN-5714.002.patch, > YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, > YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, > YARN-5714.009.patch > > Original Estimate: 120h > Remaining Estimate: 120h > > when dumping the launch container script, environment variables are dumped > based on the order internally used by the map implementation (hash based). 
It > does not take into consideration that some env variables may refer to each > other, so some env variables must be declared before those > referencing them. > In my case, I ended up having LD_LIBRARY_PATH, which depended on > HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. Thus it had a > wrong value and so native libraries weren't loaded. Jobs were running, but not > at their best efficiency. This is just one use case falling into that bug, but > I'm sure others may happen as well. > I already have a patch running in my production environment; I estimate > 5 days for packaging the patch in the right fashion for JIRA plus trying my best > to add tests. > Note: the patch is not OS-aware, with a default empty implementation. I will > only implement the Unix version in a first release. I'm not used to Windows env > variable syntax, so it will take me more time/research. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
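The dependency-ordering problem described in the report can be illustrated with a short topological sort over `$VAR` references. This is a minimal sketch, not the actual YARN-5714 patch; it only understands Unix-style `$NAME` references, mirroring the reporter's note that a Windows variant would need separate handling.

```java
import java.util.*;
import java.util.regex.*;

public class EnvOrderDemo {
  static final Pattern REF = Pattern.compile("\\$([A-Za-z_][A-Za-z0-9_]*)");

  // Order entries so that any variable referenced via $NAME is emitted
  // before the variables that reference it (depth-first topological sort).
  static List<String> order(Map<String, String> env) {
    List<String> out = new ArrayList<>();
    Set<String> done = new HashSet<>();
    for (String name : env.keySet()) {
      visit(name, env, done, out);
    }
    return out;
  }

  static void visit(String name, Map<String, String> env, Set<String> done, List<String> out) {
    if (!env.containsKey(name) || !done.add(name)) {
      return; // unknown variable, or already emitted (also breaks reference cycles)
    }
    Matcher m = REF.matcher(env.get(name));
    while (m.find()) {
      visit(m.group(1), env, done, out); // dependencies first
    }
    out.add(name);
  }

  public static void main(String[] args) {
    // The exact situation from the report: LD_LIBRARY_PATH depends on
    // HADOOP_COMMON_HOME but hashes ahead of it in the map.
    Map<String, String> env = new LinkedHashMap<>();
    env.put("LD_LIBRARY_PATH", "$HADOOP_COMMON_HOME/lib/native");
    env.put("HADOOP_COMMON_HOME", "/opt/hadoop");
    System.out.println(order(env)); // [HADOOP_COMMON_HOME, LD_LIBRARY_PATH]
  }
}
```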
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375004#comment-16375004 ] genericqa commented on YARN-7346: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 10s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s{color} | {color:red} patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2 no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/target/findbugsXml.xml) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server generated 107 new + 0 unchanged - 0 fixed = 107 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} |
[jira] [Commented] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer
[ https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374999#comment-16374999 ] BELUGA BEHR commented on YARN-7962: --- Also tighten things up a little (make start and stop symmetrical) when it comes to blocking. > Race Condition When Stopping DelegationTokenRenewer > --- > > Key: YARN-7962 > URL: https://issues.apache.org/jira/browse/YARN-7962 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Minor > Attachments: YARN-7962.1.patch > > > [https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java] > {code:java} > private ThreadPoolExecutor renewerService; > private void processDelegationTokenRenewerEvent( > DelegationTokenRenewerEvent evt) { > serviceStateLock.readLock().lock(); > try { > if (isServiceStarted) { > renewerService.execute(new DelegationTokenRenewerRunnable(evt)); > } else { > pendingEventQueue.add(evt); > } > } finally { > serviceStateLock.readLock().unlock(); > } > } > @Override > protected void serviceStop() { > if (renewalTimer != null) { > renewalTimer.cancel(); > } > appTokens.clear(); > allTokens.clear(); > this.renewerService.shutdown(); > {code} > {code:java} > 2018-02-21 11:18:16,253 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: > Error in dispatcher thread > java.util.concurrent.RejectedExecutionException: Task > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2 > rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, > pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487] > at > java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) > 
at > java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) > at > java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372) > at > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196) > at > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109) > at java.lang.Thread.run(Thread.java:745) > {code} > What I think is going on here is that the {{serviceStop}} method is not > setting the {{isServiceStarted}} flag to 'false'. > Please update so that the {{serviceStop}} method grabs the > {{serviceStateLock}} and sets {{isServiceStarted}} to _false_, before > shutting down the {{renewerService}} thread pool, to avoid this condition. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
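The fix the reporter asks for — flipping {{isServiceStarted}} under the write lock before shutting the pool down — can be modeled with a self-contained toy. The names mirror the snippet above, but this is an illustration, not the real DelegationTokenRenewer: once the flag is false, {{process}} diverts events to the pending queue instead of handing them to a terminated executor.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.*;

public class RenewerStopDemo {
  final ReadWriteLock serviceStateLock = new ReentrantReadWriteLock();
  volatile boolean isServiceStarted = true;
  final ExecutorService renewerService = Executors.newSingleThreadExecutor();
  final Queue<Runnable> pendingEventQueue = new ConcurrentLinkedQueue<>();

  // Mirrors processDelegationTokenRenewerEvent: events are only handed to
  // the pool while the service is marked started.
  void process(Runnable evt) {
    serviceStateLock.readLock().lock();
    try {
      if (isServiceStarted) {
        renewerService.execute(evt);
      } else {
        pendingEventQueue.add(evt);
      }
    } finally {
      serviceStateLock.readLock().unlock();
    }
  }

  // The proposed fix: flip the flag under the write lock BEFORE shutting
  // the pool down, so no reader can observe started==true and then hit a
  // terminated executor (the RejectedExecutionException in the trace).
  void stop() {
    serviceStateLock.writeLock().lock();
    try {
      isServiceStarted = false;
    } finally {
      serviceStateLock.writeLock().unlock();
    }
    renewerService.shutdown();
  }

  public static void main(String[] args) {
    RenewerStopDemo d = new RenewerStopDemo();
    d.stop();
    d.process(() -> { }); // after stop: queued, not rejected
    System.out.println(d.pendingEventQueue.size()); // 1
  }
}
```

Because the write lock excludes all readers, the flag flip and the shutdown become one atomic step from the dispatcher's point of view, which is the start/stop symmetry the follow-up comment asks for.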
[jira] [Updated] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer
[ https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated YARN-7962: -- Attachment: YARN-7962.1.patch > Race Condition When Stopping DelegationTokenRenewer > --- > > Key: YARN-7962 > URL: https://issues.apache.org/jira/browse/YARN-7962 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Minor > Attachments: YARN-7962.1.patch > > > [https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java] > {code:java} > private ThreadPoolExecutor renewerService; > private void processDelegationTokenRenewerEvent( > DelegationTokenRenewerEvent evt) { > serviceStateLock.readLock().lock(); > try { > if (isServiceStarted) { > renewerService.execute(new DelegationTokenRenewerRunnable(evt)); > } else { > pendingEventQueue.add(evt); > } > } finally { > serviceStateLock.readLock().unlock(); > } > } > @Override > protected void serviceStop() { > if (renewalTimer != null) { > renewalTimer.cancel(); > } > appTokens.clear(); > allTokens.clear(); > this.renewerService.shutdown(); > {code} > {code:java} > 2018-02-21 11:18:16,253 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: > Error in dispatcher thread > java.util.concurrent.RejectedExecutionException: Task > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2 > rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, > pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487] > at > java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) > at > java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) > at > 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372) > at > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196) > at > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109) > at java.lang.Thread.run(Thread.java:745) > {code} > What I think is going on here is that the {{serviceStop}} method is not > setting the {{isServiceStarted}} flag to 'false'. > Please update so that the {{serviceStop}} method grabs the > {{serviceStateLock}} and sets {{isServiceStarted}} to _false_, before > shutting down the {{renewerService}} thread pool, to avoid this condition. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-7346: - Attachment: YARN-7346.07.patch > Fix compilation errors against hbase2 beta release > -- > > Key: YARN-7346 > URL: https://issues.apache.org/jira/browse/YARN-7346 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ted Yu >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7346.00.patch, YARN-7346.01.patch, > YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, > YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, > YARN-7346.06.patch, YARN-7346.07.patch, YARN-7346.prelim1.patch, > YARN-7346.prelim2.patch, YARN-7581.prelim.patch > > > When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, > I got the following errors: > https://pastebin.com/Ms4jYEVB > This issue is to fix the compilation errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7963) TestServiceAM and TestServiceMonitor test cases are hanging
[ https://issues.apache.org/jira/browse/YARN-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374902#comment-16374902 ] Eric Yang commented on YARN-7963: - The unit test is based on ServiceMaster, which extends CompositeService. When the composite service is started, it triggers certain information to be sent to the resource manager, but no resource manager exists for the unit test. Is there a way to disable this new behavior? > TestServiceAM and TestServiceMonitor test cases are hanging > --- > > Key: YARN-7963 > URL: https://issues.apache.org/jira/browse/YARN-7963 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-native-services >Affects Versions: 3.1.0 >Reporter: Eric Yang >Priority: Major > > There is a regression when merge YARN-6592 that prevents YARN services test > cases from working. The unit tests hang on contacting resource manager at > port 8030. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7963) TestServiceAM and TestServiceMonitor test cases are hanging
[ https://issues.apache.org/jira/browse/YARN-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated YARN-7963: Description: There is a regression when merge YARN-6592 that prevents YARN services test cases from working. The unit tests hang on contacting resource manager at port 8030. (was: There is a regression when merge YARN-6592 that prevents YARN services test cases from working. The unit tests hang on contacting embedded ZooKeepers.) > TestServiceAM and TestServiceMonitor test cases are hanging > --- > > Key: YARN-7963 > URL: https://issues.apache.org/jira/browse/YARN-7963 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-native-services >Affects Versions: 3.1.0 >Reporter: Eric Yang >Priority: Major > > There is a regression when merge YARN-6592 that prevents YARN services test > cases from working. The unit tests hang on contacting resource manager at > port 8030. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container
[ https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374598#comment-16374598 ] Thomas Graves commented on YARN-7935: - thanks for the explanation Mridul. I'm fine with waiting on the spark Jira til you know the scope better, I'm currently not doing anything with bridge mode so won't be able to help there at this point. > Expose container's hostname to applications running within the docker > container > --- > > Key: YARN-7935 > URL: https://issues.apache.org/jira/browse/YARN-7935 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Major > Attachments: YARN-7935.1.patch, YARN-7935.2.patch > > > Some applications have a need to bind to the container's hostname (like > Spark) which is different from the NodeManager's hostname(NM_HOST which is > available as an env during container launch) when launched through Docker > runtime. The container's hostname can be exposed to applications via an env > CONTAINER_HOSTNAME. Another potential candidate is the container's IP but > this can be addressed in a separate jira. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5151) [YARN-3368] Support kill application from new YARN UI
[ https://issues.apache.org/jira/browse/YARN-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374563#comment-16374563 ] Sunil G commented on YARN-5151: --- There are some related changes happening in YARN-7957, so we are discussing the conditions under which Stop/Delete service should be displayed. Kill is needed for apps/services. I'll take a closer look at how we can show these under the correct scenarios and update here > [YARN-3368] Support kill application from new YARN UI > - > > Key: YARN-5151 > URL: https://issues.apache.org/jira/browse/YARN-5151 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Gergely Novák >Priority: Major > Attachments: YARN-5151.001.patch, YARN-5151.002.patch, > YARN-5151.003.patch, YARN-5151.004.patch, YARN-5151.005.patch, > screenshot-1.png, screenshot-2.png > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7957) Yarn service delete option disappears after stopping application
[ https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374561#comment-16374561 ] Sunil G commented on YARN-7957: --- It seems the GET API to check service status was never used in the UI. The UI has always depended on ATS to get the status. [~gsaha], are we publishing these details (state as STOPPED or DELETED) to ATS so that the UI can read from there? > Yarn service delete option disappears after stopping application > > > Key: YARN-7957 > URL: https://issues.apache.org/jira/browse/YARN-7957 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Affects Versions: 3.1.0 >Reporter: Yesha Vora >Assignee: Sunil G >Priority: Critical > Attachments: YARN-7957.01.patch > > > Steps: > 1) Launch yarn service > 2) Go to service page and click on Setting button->"Stop Service". The > application will be stopped. > 3) Refresh page > Here, setting button disappears. Thus, user can not delete service from UI > after stopping application > Expected behavior: > Setting button should be present on UI page after application is stopped. If > application is stopped, setting button should only have "Delete Service" > action available. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374520#comment-16374520 ] Haibo Chen commented on YARN-7346: -- Totally agree we should test changes to the hbase 2.0.0 module in trunk, [~busbey]. However, since we don't pull in hbase 2.0 dependencies in trunk (the default is 1.2.6), the hadoop-yarn-server-timelineservice-hbase-server-2 module won't compile or generate .class files that findbugs can consume. That's why we are trying to skip findbugs and javadoc. We are open to any suggestion that would solve the problem (two versions of code for two versions of HBase). So far, what we've come up with is to create a default profile that enables the HBase 1.2.6 version and skips the 2.0.0 version, and to flip it the other way around in another branch that we maintain in parallel to trunk. Any suggestion/alternative is greatly appreciated! > Fix compilation errors against hbase2 beta release > -- > > Key: YARN-7346 > URL: https://issues.apache.org/jira/browse/YARN-7346 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ted Yu >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7346.00.patch, YARN-7346.01.patch, > YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, > YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, > YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, > YARN-7581.prelim.patch > > > When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, > I got the following errors: > https://pastebin.com/Ms4jYEVB > This issue is to fix the compilation errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374508#comment-16374508 ] Rohith Sharma K S edited comment on YARN-7346 at 2/23/18 3:41 PM: -- bq. Even if this module for HBase 2.0.0 support isn't the default used in trunk, shouldn't we still be testing changes to the module? This is the basic gap of conditional compilation modules. For this, the last ATSv2 community call decided to use the YARN-7055 branch as the HBase-2.0 default module. Every patch going into trunk should also be verified against the YARN-7055 branch, which confirms that nothing breaks for the HBase-2 module. Otherwise we can't proceed with any decision while this Jenkins issue remains. There are many other issues to be addressed for stability; the umbrella YARN-7213 tracks them for HBase-2.0. This is the first patch with conditional compilation modules to be committed. We ensure that default compilation, packaging, findbugs, checkstyle and the rest don't break anything for the HBase-1 module, which is activated by default. 
> Fix compilation errors against hbase2 beta release > -- > > Key: YARN-7346 > URL: https://issues.apache.org/jira/browse/YARN-7346 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ted Yu >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7346.00.patch, YARN-7346.01.patch, > YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, > YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, > YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, > YARN-7581.prelim.patch > > > When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, > I got the following errors: > https://pastebin.com/Ms4jYEVB > This issue is to fix the compilation errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374477#comment-16374477 ] Sean Busbey commented on YARN-7346: --- {quote} Binding maven-compiler-plugin to phase 'none' is working, but not the other two plugins. Hence, the findbugs and javadoc issues. {quote} This isn't working because those plugins are being expressly called, e.g. {{/usr/bin/mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-0 test-compile findbugs:findbugs -DskipTests=true}} {quote} In this case, we are doing conditional compilation of ATSv2 with HBase (1.2.6 and 2.0.0). HBase 1.2.6 is the default in trunk, so we want to disable the hadoop-yarn-server-timelineservice-hbase-server-2 module which includes code written for 2.0.0. {quote} Even if this module for HBase 2.0.0 support isn't the default used in trunk, shouldn't we still be testing changes to the module? > Fix compilation errors against hbase2 beta release > -- > > Key: YARN-7346 > URL: https://issues.apache.org/jira/browse/YARN-7346 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ted Yu >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7346.00.patch, YARN-7346.01.patch, > YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, > YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, > YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, > YARN-7581.prelim.patch > > > When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, > I got the following errors: > https://pastebin.com/Ms4jYEVB > This issue is to fix the compilation errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374458#comment-16374458 ] Haibo Chen commented on YARN-7346: -- Thanks a lot for your comments [~busbey]! In this case, we are doing conditional compilation of ATSv2 with HBase (1.2.6 and 2.0.0). HBase 1.2.6 is the default in trunk, so we want to disable the hadoop-yarn-server-timelineservice-hbase-server-2 module which includes code written for 2.0.0. This is what we are doing in patch 06 in hadoop-yarn-server-timelineservice-hbase-server-2 pom.xml as an attempt
{code:xml}
<profile>
  <id>default</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <executions>
          <execution>
            <id>default-compile</id>
            <phase>none</phase>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <executions>
          <execution>
            <id>default-javadoc</id>
            <phase>none</phase>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>findbugs-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>default-findbugs</id>
            <phase>none</phase>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
{code}
Binding maven-compiler-plugin to phase 'none' is working, but not the other two plugins. Hence, the findbugs and javadoc issues. Baffling to me, binding to the none phase does not work for all plugins. Any suggestion or things I am missing, [~busbey]? > Fix compilation errors against hbase2 beta release > -- > > Key: YARN-7346 > URL: https://issues.apache.org/jira/browse/YARN-7346 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ted Yu >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7346.00.patch, YARN-7346.01.patch, > YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, > YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, > YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, > YARN-7581.prelim.patch > > > When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, > I got the following errors: > https://pastebin.com/Ms4jYEVB > This issue is to fix the compilation errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
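One possible reason the unbinding approach above only half works: removing an execution's lifecycle phase stops the mojo from running during the lifecycle, but Jenkins invokes `findbugs:findbugs` directly on the command line, and a directly invoked goal bypasses lifecycle bindings entirely. A hedged alternative sketch, assuming the standard `skip` parameters of findbugs-maven-plugin and maven-javadoc-plugin; since the configuration sits at the plugin level (not inside an execution), it also applies to direct goal invocations:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <configuration>
        <!-- Skipped in this module: no classes are compiled under the
             default (HBase 1.x) profile, so there is nothing to analyze. -->
        <skip>true</skip>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <configuration>
        <skip>true</skip>
      </configuration>
    </plugin>
  </plugins>
</build>
```

Whether these switches fully satisfy the Yetus precommit run would still need to be verified against the actual build.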
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374381#comment-16374381 ] Sean Busbey commented on YARN-7346: --- {quote} Haibo Chen To me, we should retain 05 patch itself and findbugs and java docs are irrelevant errors. As I see that Yetus is trying to find findbug.xml file which doesn't exist. I think we should go ahead and commit 05 patch. {quote} -1 on committing without addressing findbugs. The error given is caused by the findbugs plugin failing to generate output. Either we need to figure out why that happened and fix it or if findbugs can't run in a particular module (like if it doesn't have source code) then we need to disable it with an explanation why. > Fix compilation errors against hbase2 beta release > -- > > Key: YARN-7346 > URL: https://issues.apache.org/jira/browse/YARN-7346 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Ted Yu >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7346.00.patch, YARN-7346.01.patch, > YARN-7346.02.patch, YARN-7346.03-incremental.patch, YARN-7346.03.patch, > YARN-7346.04-incremental.patch, YARN-7346.04.patch, YARN-7346.05.patch, > YARN-7346.06.patch, YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, > YARN-7581.prelim.patch > > > When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, > I got the following errors: > https://pastebin.com/Ms4jYEVB > This issue is to fix the compilation errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly
[ https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374260#comment-16374260 ] genericqa commented on YARN-7965: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 54s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} YARN-3409 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 3s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 59s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 40s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} YARN-3409 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 16s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 30s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}159m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7965 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911693/YARN-7965-YARN-3409.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 91463e79d8ec 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh
[jira] [Commented] (YARN-7856) Validation node attributes in NM
[ https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374257#comment-16374257 ] genericqa commented on YARN-7856: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} YARN-3409 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 39s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 28s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in YARN-3409 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} YARN-3409 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 9s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 25s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 94m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7856 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911701/YARN-7856-YARN-3409.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b447e780053f 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3409 / 0c3bf98 |
[jira] [Commented] (YARN-7949) [UI2] ArtifactsId should not be a compulsory field for new service
[ https://issues.apache.org/jira/browse/YARN-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374245#comment-16374245 ] Hudson commented on YARN-7949: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13705 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13705/]) YARN-7949. [UI2] ArtifactsId should not be a compulsory field for new (sunilg: rev d1cd573687fa3466a5ceb9a525141a8c3a8f686f) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/service-component-table.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-servicedef.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/service-component-table.js > [UI2] ArtifactsId should not be a compulsory field for new service > -- > > Key: YARN-7949 > URL: https://issues.apache.org/jira/browse/YARN-7949 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Affects Versions: 3.1.0 >Reporter: Yesha Vora >Assignee: Yesha Vora >Priority: Major > Fix For: 3.1.0, 3.2.0 > > Attachments: YARN-7949.001.patch > > > 1) Click on New Service > 2) Create a component > The Create Component page has Artifacts Id as a compulsory entry. A few YARN service > examples, such as sleeper.json, do not need to provide an artifacts id. > {code:java|title=sleeper.json} > { > "name": "sleeper-service", > "components" : > [ > { > "name": "sleeper", > "number_of_containers": 2, > "launch_command": "sleep 90", > "resource": { > "cpus": 1, > "memory": "256" > } > } > ] > }{code} > Thus, artifactsId should not be a compulsory field. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7944) Remove master node link from headers of application pages
[ https://issues.apache.org/jira/browse/YARN-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374233#comment-16374233 ] Sunil G commented on YARN-7944: --- [~yeshavora] Are we using masterNodeURL or the container log URL anywhere else? If not, we can remove it from the model too. > Remove master node link from headers of application pages > - > > Key: YARN-7944 > URL: https://issues.apache.org/jira/browse/YARN-7944 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Affects Versions: 3.1.0 >Reporter: Yesha Vora >Assignee: Yesha Vora >Priority: Major > Fix For: 3.1.0 > > Attachments: YARN-7944.001.patch > > > RM UI2 has links for the master container log and the master node. > These links are published on the application and service pages. They are not > required on all pages because the AM container node link and container log link > are already present in the Application view. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7949) [UI2] ArtifactsId should not be a compulsory field for new service
[ https://issues.apache.org/jira/browse/YARN-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7949: -- Summary: [UI2] ArtifactsId should not be a compulsory field for new service (was: ArtifactsId should not be a compulsory field for new service) > [UI2] ArtifactsId should not be a compulsory field for new service > -- > > Key: YARN-7949 > URL: https://issues.apache.org/jira/browse/YARN-7949 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Affects Versions: 3.1.0 >Reporter: Yesha Vora >Assignee: Yesha Vora >Priority: Major > Attachments: YARN-7949.001.patch > > > 1) Click on New Service > 2) Create a component > The Create Component page has Artifacts Id as a compulsory entry. A few YARN service > examples, such as sleeper.json, do not need to provide an artifacts id. > {code:java|title=sleeper.json} > { > "name": "sleeper-service", > "components" : > [ > { > "name": "sleeper", > "number_of_containers": 2, > "launch_command": "sleep 90", > "resource": { > "cpus": 1, > "memory": "256" > } > } > ] > }{code} > Thus, artifactsId should not be a compulsory field. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7675) [UI2] Support loading pre-2.8 version /scheduler REST response for queue page
[ https://issues.apache.org/jira/browse/YARN-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374221#comment-16374221 ] Hudson commented on YARN-7675: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13704 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13704/]) YARN-7675. [UI2] Support loading pre-2.8 version /scheduler REST (sunilg: rev cc683952d2c1730109497aa78dd53629e914d294) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue/capacity-queue.js > [UI2] Support loading pre-2.8 version /scheduler REST response for queue page > - > > Key: YARN-7675 > URL: https://issues.apache.org/jira/browse/YARN-7675 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Gergely Novák >Assignee: Gergely Novák >Priority: Major > Fix For: 3.1.0, 3.2.0 > > Attachments: YARN-7675.001.patch > > > If we connect the new YARN UI to any Hadoop versions older than 2.8 it won't > load. The console shows this trace: > {noformat} > TypeError: Cannot read property 'queueCapacitiesByPartition' of undefined > at Class.normalizeSingleResponse (yarn-ui.js:13903) > at Class.superWrapper [as normalizeSingleResponse] (vendor.js:31811) > at Class.handleQueue (yarn-ui.js:13928) > at Class.normalizeArrayResponse (yarn-ui.js:13952) > at Class.normalizeQueryResponse (vendor.js:101566) > at Class.normalizeResponse (vendor.js:101468) > at > ember$data$lib$system$store$serializer$response$$normalizeResponseHelper > (vendor.js:95345) > at vendor.js:95672 > at Backburner.run (vendor.js:10426) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7675) [UI2] Support loading pre-2.8 version /scheduler REST response for queue page
[ https://issues.apache.org/jira/browse/YARN-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7675: -- Summary: [UI2] Support loading pre-2.8 version /scheduler REST response for queue page (was: The new UI won't load for pre 2.8 Hadoop versions because queueCapacitiesByPartition is missing from the scheduler API) > [UI2] Support loading pre-2.8 version /scheduler REST response for queue page > - > > Key: YARN-7675 > URL: https://issues.apache.org/jira/browse/YARN-7675 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Gergely Novák >Assignee: Gergely Novák >Priority: Major > Attachments: YARN-7675.001.patch > > > If we connect the new YARN UI to any Hadoop versions older than 2.8 it won't > load. The console shows this trace: > {noformat} > TypeError: Cannot read property 'queueCapacitiesByPartition' of undefined > at Class.normalizeSingleResponse (yarn-ui.js:13903) > at Class.superWrapper [as normalizeSingleResponse] (vendor.js:31811) > at Class.handleQueue (yarn-ui.js:13928) > at Class.normalizeArrayResponse (yarn-ui.js:13952) > at Class.normalizeQueryResponse (vendor.js:101566) > at Class.normalizeResponse (vendor.js:101468) > at > ember$data$lib$system$store$serializer$response$$normalizeResponseHelper > (vendor.js:95345) > at vendor.js:95672 > at Backburner.run (vendor.js:10426) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7856) Validation node attributes in NM
[ https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374184#comment-16374184 ] Weiwei Yang commented on YARN-7856: --- Hi [~sunilg] A new patch is uploaded, addressing your comments: {quote}could u pls check a case where one of getAttributePrefix may be null {quote} This is checked in the patch {code} if (Strings.isNullOrEmpty(prefix)) { throw new IOException("Attribute prefix must be set"); } {code} We don't allow any prefix to be null, so the check fails immediately if one is found. {quote}we could add some checks to avoid special characters etc {quote} Done, by calling the {{NodeLabelUtil#checkAndThrow...}} APIs. {quote}we could validate each of these attribute alone. {quote} I encapsulated the check against a given node attribute set in {{NodeLabelUtil}} because it is easier to reuse from other components; the RM side will clearly need it when adding centralized attributes. Please help review the v2 patch and let me know if you have any more comments. Thanks > Validation node attributes in NM > > > Key: YARN-7856 > URL: https://issues.apache.org/jira/browse/YARN-7856 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, RM >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > Attachments: YARN-7856-YARN-3409.001.patch, > YARN-7856-YARN-3409.002.patch > > > NM needs to do proper validation of the attributes before sending them to > RM; this includes > # a valid prefix is present > # no duplicate entries > # do not allow two attributes with the same prefix/name but different types > This could be a utility class that can be used on both RM/NM sides. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
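The prefix check quoted in the comment above can be sketched as a small standalone utility. This is an illustrative sketch only, not the actual NodeLabelUtil code from the patch; the class name and the allowed-character rule here are assumptions modeled on YARN's node-label naming conventions.

```java
import java.io.IOException;
import java.util.regex.Pattern;

// Illustrative sketch of the NM-side attribute-prefix validation discussed
// above. The class name and character rule are assumptions; the real checks
// live in NodeLabelUtil and the patch under review.
public class AttributePrefixValidator {

  // Assume prefixes look like reversed-domain names, e.g. "nm.yarn.io":
  // alphanumerics plus dot, dash and underscore, starting alphanumeric.
  private static final Pattern VALID_PREFIX =
      Pattern.compile("^[A-Za-z0-9][A-Za-z0-9_.\\-]*$");

  public static void checkAttributePrefix(String prefix) throws IOException {
    // Mirror the quoted check: a null or empty prefix fails immediately.
    if (prefix == null || prefix.isEmpty()) {
      throw new IOException("Attribute prefix must be set");
    }
    // Reject special characters, as the review comment suggests.
    if (!VALID_PREFIX.matcher(prefix).matches()) {
      throw new IOException("Invalid attribute prefix: " + prefix);
    }
  }
}
```

Keeping the check in one static utility matches the reuse argument made above: both the NM (distributed attributes) and the RM (centralized attributes) can call the same method.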
[jira] [Updated] (YARN-7856) Validation node attributes in NM
[ https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-7856: -- Attachment: YARN-7856-YARN-3409.002.patch > Validation node attributes in NM > > > Key: YARN-7856 > URL: https://issues.apache.org/jira/browse/YARN-7856 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, RM >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > Attachments: YARN-7856-YARN-3409.001.patch, > YARN-7856-YARN-3409.002.patch > > > NM needs to do proper validation of the attributes before sending them to > RM; this includes > # a valid prefix is present > # no duplicate entries > # do not allow two attributes with the same prefix/name but different types > This could be a utility class that can be used on both RM/NM sides. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374143#comment-16374143 ] genericqa commented on YARN-7929: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 16 new + 51 unchanged - 1 fixed = 67 total (was 52) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 13s{color} | {color:green} hadoop-sls in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7929 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911688/YARN-7929.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6accc44ceeef 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c36b4aa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/19795/artifact/out/diff-checkstyle-hadoop-tools_hadoop-sls.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19795/testReport/ | | Max. process+thread count | 456 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19795/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically
[jira] [Commented] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM
[ https://issues.apache.org/jira/browse/YARN-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374117#comment-16374117 ] genericqa commented on YARN-7637: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 6s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7637 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911682/YARN-7637.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d4a4720c4642 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c36b4aa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/19794/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19794/testReport/ | | Max. process+thread count | 407 (vs. ulimit of 1) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U:
[jira] [Commented] (YARN-7965) NodeAttributeManager add/get API is not working properly
[ https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374106#comment-16374106 ] Weiwei Yang commented on YARN-7965: --- Hi [~naganarasimha...@apache.org] Please help review, thanks. > NodeAttributeManager add/get API is not working properly > > > Key: YARN-7965 > URL: https://issues.apache.org/jira/browse/YARN-7965 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > Attachments: YARN-7965-YARN-3409.001.patch > > > Fix the following issues: > # After adding node attributes to the manager, newly added > attributes could not be retrieved > # The get cluster attributes API should return an empty set when the given prefix has > no match > # When an attribute is removed from all nodes, the manager did not remove > this mapping > and add UT -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7965) NodeAttributeManager add/get API is not working properly
[ https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-7965: -- Attachment: YARN-7965-YARN-3409.001.patch > NodeAttributeManager add/get API is not working properly > > > Key: YARN-7965 > URL: https://issues.apache.org/jira/browse/YARN-7965 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > Attachments: YARN-7965-YARN-3409.001.patch > > > Fix the following issues: > # After adding node attributes to the manager, newly added > attributes could not be retrieved > # The get cluster attributes API should return an empty set when the given prefix has > no match > # When an attribute is removed from all nodes, the manager did not remove > this mapping > and add UT -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7965) NodeAttributeManager add/get API is not working properly
[ https://issues.apache.org/jira/browse/YARN-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-7965: -- Description: Fix following issues, # After add node attributes to the manager, could not retrieve newly added attributes # Get cluster attributes API should return empty set when given prefix has no match # When an attribute is removed from all nodes, the manager did not remove this mapping and add UT was: Fix following issues, # After add node attributes to the manager, could not retrieve newly added attributes # Get cluster attributes API should return empty set when given prefix has no match and add UT > NodeAttributeManager add/get API is not working properly > > > Key: YARN-7965 > URL: https://issues.apache.org/jira/browse/YARN-7965 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > > Fix following issues, > # After add node attributes to the manager, could not retrieve newly added > attributes > # Get cluster attributes API should return empty set when given prefix has > no match > # When an attribute is removed from all nodes, the manager did not remove > this mapping > and add UT -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7965) NodeAttributeManager add/get API is not working properly
Weiwei Yang created YARN-7965: - Summary: NodeAttributeManager add/get API is not working properly Key: YARN-7965 URL: https://issues.apache.org/jira/browse/YARN-7965 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Weiwei Yang Assignee: Weiwei Yang Fix the following issues: # After adding node attributes to the manager, newly added attributes could not be retrieved # The get cluster attributes API should return an empty set when the given prefix has no match and add UT -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiandan Yang updated YARN-7929: Attachment: YARN-7929.002.patch > SLS supports setting container execution > > > Key: YARN-7929 > URL: https://issues.apache.org/jira/browse/YARN-7929 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Major > Attachments: YARN-7929.001.patch, YARN-7929.002.patch > > > SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file > cannot set the execution type of a container. > This jira introduces execution type in SLS for better simulation. > This will help perf testing with regard to Opportunistic > Containers. > RUMEN uses the default execution type GUARANTEED. > SYNTH sets the execution type via the fields map_execution_type and > reduce_execution_type. > SLS sets the execution type via the field container.execution_type. > For compatibility, GUARANTEED is used as the default value when these > fields are not set in the trace file. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
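The compatibility rule described in the issue above — read an execution type from a trace entry and fall back to GUARANTEED when the field is absent — can be sketched as follows. The field name `container.execution_type` comes from the description; the parsing helper itself is hypothetical and not the actual SLS code.

```java
import java.util.Map;

// Hedged sketch of the SLS defaulting rule described above: look up the
// execution type in a parsed trace entry and fall back to GUARANTEED when
// the field is missing. The helper is illustrative, not actual SLS code.
public class TraceExecutionType {

  public enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  public static ExecutionType fromSlsTrace(Map<String, String> entry) {
    // "container.execution_type" is the field named in the description.
    String raw = entry.get("container.execution_type");
    if (raw == null || raw.trim().isEmpty()) {
      // Compatibility default for older trace files without the field.
      return ExecutionType.GUARANTEED;
    }
    return ExecutionType.valueOf(raw.trim().toUpperCase());
  }
}
```

Old SLS trace files without the field keep their previous behavior, since every container then parses as GUARANTEED.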
[jira] [Commented] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM
[ https://issues.apache.org/jira/browse/YARN-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374071#comment-16374071 ] Sunil G commented on YARN-7637: --- The change looks straightforward to me. +1 pending Jenkins. > GPU volume creation command fails when work preserving is disabled at NM > > > Key: YARN-7637 > URL: https://issues.apache.org/jira/browse/YARN-7637 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Sunil G >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7637.001.patch > > > When work preserving is disabled, NM uses {{NMNullStateStoreService}}. Hence > resource mappings related to GPU won't be saved at the Container. > This has to be rechecked and stored accordingly. > cc/ [~leftnoteasy] and [~Zian Chen] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7637) GPU volume creation command fails when work preserving is disabled at NM
[ https://issues.apache.org/jira/browse/YARN-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zian Chen updated YARN-7637: Attachment: YARN-7637.001.patch > GPU volume creation command fails when work preserving is disabled at NM > > > Key: YARN-7637 > URL: https://issues.apache.org/jira/browse/YARN-7637 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Sunil G >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7637.001.patch > > > When work preserving is disabled, NM uses {{NMNullStateStoreService}}. Hence > resource mappings related to GPU won't be saved at the Container. > This has to be rechecked and stored accordingly. > cc/ [~leftnoteasy] and [~Zian Chen] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 beta release
[ https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374063#comment-16374063 ] genericqa commented on YARN-7346:
-
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 15s | trunk passed |
| +1 | compile | 12m 30s | trunk passed |
| +1 | checkstyle | 1m 51s | trunk passed |
| +1 | mvnsite | 1m 56s | trunk passed |
| +1 | shadedclient | 13m 17s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests |
| +1 | findbugs | 1m 7s | trunk passed |
| +1 | javadoc | 1m 36s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 59s | the patch passed |
| +1 | compile | 14m 23s | the patch passed |
| +1 | javac | 14m 23s | the patch passed |
| -0 | checkstyle | 1m 54s | root: The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) |
| +1 | mvnsite | 2m 42s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 11s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 9m 11s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests |
| -1 | findbugs | 0m 18s | patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2 no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/target/findbugsXml.xml) |
| -1 | javadoc | 0m 22s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server generated 107 new + 0 unchanged - 0 fixed = 107 total (was 0)