[jira] [Commented] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288844#comment-16288844
 ] 

genericqa commented on YARN-7633:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7633 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901836/YARN-7633.5.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux a2412cf8fa69 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7efc4f7 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/18901/artifact/out/whitespace-eol.txt
 |
| Max. process+thread count | 331 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18901/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch, 
> YARN-7633.4.patch, YARN-7633.5.patch
>
>







[jira] [Updated] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7633:
---
Attachment: YARN-7633.5.patch

Fixed some formatting issues.

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch, 
> YARN-7633.4.patch, YARN-7633.5.patch
>
>







[jira] [Comment Edited] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288813#comment-16288813
 ] 

Suma Shivaprasad edited comment on YARN-7633 at 12/13/17 7:08 AM:
--

Thanks for the detailed review [~sunilg]. Attached patch with review comments 
addressed. 


was (Author: suma.shivaprasad):
Thanks for the detailed review @Sunil G. Attached patch with review comments 
addressed. 

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch, 
> YARN-7633.4.patch, YARN-7633.5.patch
>
>







[jira] [Updated] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7633:
---
Attachment: YARN-7633.4.patch

Thanks for the detailed review @Sunil G. Attached patch with review comments 
addressed. 

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch, 
> YARN-7633.4.patch
>
>







[jira] [Comment Edited] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288758#comment-16288758
 ] 

Sunil G edited comment on YARN-7643 at 12/13/17 6:47 AM:
-

Thanks [~suma.shivaprasad]. Some comments here.
1.
{code}
  void replaceQueueFromPlacementContext(
  ApplicationPlacementContext placementContext,
  ApplicationSubmissionContext context) {
// Set it to ApplicationSubmissionContext
//apply queue mapping only to new application submissions
if (placementContext != null && !StringUtils.equalsIgnoreCase(
context.getQueue(), placementContext.getQueue())) {
  LOG.info("Placed application=" + context.getApplicationId() +
  " to queue=" + placementContext.getQueue() + ", original queue="
  + context
  .getQueue());
  context.setQueue(placementContext.getQueue());
}
  }
{code}
The queue chosen by placement is already updated in the submission context 
during application submission, so during recovery we already have the mapped 
queue name. Hence {{UserGroupMappingPlacementRule.getPlacementForApp}} will 
return the correct mapped queue name, yet we redo the same action. The current 
issue arises because the event below has to be fired from RMAppImpl to the 
Scheduler, and *placementContext* will be null in the recovery case (this 
might break normal user-mapping also?).
{code}
  app.scheduler.handle(
  new AppAddedSchedulerEvent(app.user, app.submissionContext, true,
  app.applicationPriority, app.placementContext));
{code}
A couple of suggestions (sketched below):
a. Could we save *placementContext* under the app data in the state store?
b. While recomputing *placeApplication*, could we bypass some API calls in 
{{PlacementManager}}, since we already have the mapped queue name?


2. Could we optimize {{addApplicationOnRecovery}} in CS further? The multiple 
if checks are a bit confusing. Maybe we can create {{getQueueWithMappings}}, 
and instead of calling getQueue from addApplication/addApplicationOnRecovery, 
we can getQueue and do the mapping if needed. A bit of refactoring only.
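
A minimal sketch of suggestions (a)/(b), assuming a hypothetical recovery path; 
the {{isAppRecovering}} flag, the fallback, and the single-argument 
{{ApplicationPlacementContext}} constructor are illustrative assumptions, not 
the actual patch:
{code}
// Sketch: during recovery, reuse the queue name already recorded in the
// submission context instead of re-running every placement rule.
ApplicationPlacementContext placementContext = app.placementContext;
if (placementContext == null && isAppRecovering) {
  // Hypothetical fallback: the mapped queue name was persisted in the
  // submission context at submit time, so rebuild the context from it.
  placementContext =
      new ApplicationPlacementContext(app.submissionContext.getQueue());
}
app.scheduler.handle(
    new AppAddedSchedulerEvent(app.user, app.submissionContext,
        true /* isAppRecovering */, app.applicationPriority,
        placementContext));
{code}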



was (Author: sunilg):
Thanks [~suma.shivaprasad]. Some comments here.
#
{code}
  void replaceQueueFromPlacementContext(
  ApplicationPlacementContext placementContext,
  ApplicationSubmissionContext context) {
// Set it to ApplicationSubmissionContext
//apply queue mapping only to new application submissions
if (placementContext != null && !StringUtils.equalsIgnoreCase(
context.getQueue(), placementContext.getQueue())) {
  LOG.info("Placed application=" + context.getApplicationId() +
  " to queue=" + placementContext.getQueue() + ", original queue="
  + context
  .getQueue());
  context.setQueue(placementContext.getQueue());
}
  }
{code}
The queue chosen by placement is already updated in the submission context 
during application submission, so during recovery we already have the mapped 
queue name. Hence {{UserGroupMappingPlacementRule.getPlacementForApp}} will 
return the correct mapped queue name, yet we redo the same action. The current 
issue arises because the event below has to be fired from RMAppImpl to the 
Scheduler, and *placementContext* will be null in the recovery case (this 
might break normal user-mapping also?).
{code}
  app.scheduler.handle(
  new AppAddedSchedulerEvent(app.user, app.submissionContext, true,
  app.applicationPriority, app.placementContext));
{code}
A couple of suggestions:
1. Could we save *placementContext* under the app data in the state store?
2. While recomputing *placeApplication*, could we bypass some API calls in 
{{PlacementManager}}, since we already have the mapped queue name?

# Could we optimize {{addApplicationOnRecovery}} in CS further? The multiple 
if checks are a bit confusing. Maybe we can create {{getQueueWithMappings}}, 
and instead of calling getQueue from addApplication/addApplicationOnRecovery, 
we can getQueue and do the mapping if needed. A bit of refactoring only.


> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch, YARN-7643.2.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if 
> it doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.




[jira] [Commented] (YARN-7536) em-table filter UX issues

2017-12-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288774#comment-16288774
 ] 

ASF GitHub Bot commented on YARN-7536:
--

Github user skmvasu commented on the issue:

https://github.com/apache/hadoop/pull/308
  
Jenkins applies the patch commit by commit and is not able to recover from the 
conflicts. Closing this in favour of https://github.com/apache/hadoop/pull/313




> em-table filter UX issues
> -
>
> Key: YARN-7536
> URL: https://issues.apache.org/jira/browse/YARN-7536
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>Priority: Minor
>  Labels: yarn-ui
>
> When the filters are rendered in the YARN UI, there are some UI issues:
> 1) The filters are not expanded by default.
> 2) The filter section is empty even when there are 2 items to filter.






[jira] [Commented] (YARN-7536) em-table filter UX issues

2017-12-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288773#comment-16288773
 ] 

ASF GitHub Bot commented on YARN-7536:
--

GitHub user skmvasu opened a pull request:

https://github.com/apache/hadoop/pull/313

YARN-7536. Em table fix

Fixes UX issues with em-table:
- expands filters by default
- sets min items for filters to 1

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/skmvasu/hadoop em_ux_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/313.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #313


commit 3a3cd6ad12f25ebe865addd39f782f7042508791
Author: Vasu 
Date:   2017-11-20T09:10:08Z

Split em-table style overrides to a different file

commit f5d5f446bf4d251415021d58865d70585e74fbb1
Author: Vasu 
Date:   2017-11-20T09:12:36Z

expand em-table accordian by default

commit 4e0f135d2f1ce678afaf22abb579a1c28eb6b5d9
Author: Vasu 
Date:   2017-11-20T09:16:31Z

render filters for min 1 item

commit fc8c3947ca9ba15d0b51cb6d6c30093afc1f3c29
Author: Vasu 
Date:   2017-12-01T06:04:38Z

fixes em table styles




> em-table filter UX issues
> -
>
> Key: YARN-7536
> URL: https://issues.apache.org/jira/browse/YARN-7536
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>Priority: Minor
>  Labels: yarn-ui
>
> When the filters are rendered in the YARN UI, there are some UI issues:
> 1) The filters are not expanded by default.
> 2) The filter section is empty even when there are 2 items to filter.






[jira] [Commented] (YARN-7536) em-table filter UX issues

2017-12-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288775#comment-16288775
 ] 

ASF GitHub Bot commented on YARN-7536:
--

Github user skmvasu closed the pull request at:

https://github.com/apache/hadoop/pull/308


> em-table filter UX issues
> -
>
> Key: YARN-7536
> URL: https://issues.apache.org/jira/browse/YARN-7536
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>Priority: Minor
>  Labels: yarn-ui
>
> When the filters are rendered in the YARN UI, there are some UI issues:
> 1) The filters are not expanded by default.
> 2) The filter section is empty even when there are 2 items to filter.






[jira] [Commented] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288758#comment-16288758
 ] 

Sunil G commented on YARN-7643:
---

Thanks [~suma.shivaprasad]. Some comments here.
#
{code}
  void replaceQueueFromPlacementContext(
  ApplicationPlacementContext placementContext,
  ApplicationSubmissionContext context) {
// Set it to ApplicationSubmissionContext
//apply queue mapping only to new application submissions
if (placementContext != null && !StringUtils.equalsIgnoreCase(
context.getQueue(), placementContext.getQueue())) {
  LOG.info("Placed application=" + context.getApplicationId() +
  " to queue=" + placementContext.getQueue() + ", original queue="
  + context
  .getQueue());
  context.setQueue(placementContext.getQueue());
}
  }
{code}
The queue chosen by placement is already updated in the submission context 
during application submission, so during recovery we already have the mapped 
queue name. Hence {{UserGroupMappingPlacementRule.getPlacementForApp}} will 
return the correct mapped queue name, yet we redo the same action. The current 
issue arises because the event below has to be fired from RMAppImpl to the 
Scheduler, and *placementContext* will be null in the recovery case (this 
might break normal user-mapping also?).
{code}
  app.scheduler.handle(
  new AppAddedSchedulerEvent(app.user, app.submissionContext, true,
  app.applicationPriority, app.placementContext));
{code}
A couple of suggestions:
1. Could we save *placementContext* under the app data in the state store?
2. While recomputing *placeApplication*, could we bypass some API calls in 
{{PlacementManager}}, since we already have the mapped queue name?

# Could we optimize {{addApplicationOnRecovery}} in CS further? The multiple 
if checks are a bit confusing. Maybe we can create {{getQueueWithMappings}}, 
and instead of calling getQueue from addApplication/addApplicationOnRecovery, 
we can getQueue and do the mapping if needed. A bit of refactoring only.


> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch, YARN-7643.2.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if 
> it doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.






[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Fix Version/s: 2.10.0

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 2.8.2, 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.
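
For reference, one way to restructure the check so the error branch fires only 
on an actual rejection (a sketch based on the snippet above; the committed 
patch may differ in detail):
{code}
if (response.getAreNodeLabelsAcceptedByRM()) {
  // Gate only the logging on the debug level, not the acceptance check.
  if (LOG.isDebugEnabled()) {
    LOG.debug("Node Labels {" + StringUtils.join(",", previousNodeLabels)
        + "} were Accepted by RM ");
  }
} else {
  // Labels were genuinely rejected by the RM.
  LOG.error("NM node labels {" + StringUtils.join(",", previousNodeLabels)
      + "} were not accepted by RM and message from RM : "
      + response.getDiagnosticsMessage());
}
{code}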






[jira] [Commented] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288741#comment-16288741
 ] 

Hudson commented on YARN-7647:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13364 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13364/])
YARN-7647. NM print inappropriate error log when node-labels is enabled. (wwei: 
rev 7efc4f76885348730728c0201dd0d1a89b213e9c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java


> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 2.8.2, 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Comment Edited] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288732#comment-16288732
 ] 

Weiwei Yang edited comment on YARN-7647 at 12/13/17 5:29 AM:
-

Committed the patch to branch-2, branch-2.8, branch-2.9, branch-3.0 and trunk. 
Thanks [~fly_in_gis] for the contribution.


was (Author: cheersyang):
Committed the patch to branch-2.9, branch-3.0 and trunk. Thanks [~fly_in_gis] 
for the contribution.

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 2.8.2, 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Fix Version/s: 2.8.2

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 2.8.2, 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Commented] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288732#comment-16288732
 ] 

Weiwei Yang commented on YARN-7647:
---

Committed the patch to branch-2.9, branch-3.0 and trunk. Thanks [~fly_in_gis] 
for the contribution.

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Fix Version/s: 3.0.1

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Fix Version/s: 2.9.1

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 3.1.0, 2.9.1
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Commented] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288729#comment-16288729
 ] 

Sunil G commented on YARN-7633:
---

Thanks [~suma.shivaprasad]

A few comments:
# {{yarn.scheduler.capacity.queue-mappings}} is an already existing property, 
and until now we configured user-queue mappings like 
{{u:maria:engineering,g:webadmins:weblog}}. With auto queue creation support, 
we can also have an added option like {{u:%user:parent1.%user}}, so it might 
look like {{u:maria:engineering,g:webadmins:weblog,u:%user:parent1.%user}} 
(see the sketch after this list).
Hence the newly added {{Queue Mapping based on User or Group}} section might 
confuse users, as the same section is already present under {{Queue 
Properties}}. We could just say that the config from the previous section can 
be reused, and that these additional params can be added to support auto-queue 
creation.
# Under the {{Features}} section, we have an entry for {{Queue Mapping based 
on User or Group}}. Also mention auto queue creation there.
# {{yarn.resourcemanager.scheduler.monitor.policies}} is already mentioned 
under {{Capacity Scheduler container preemption}}. We could refer to that 
config instead of restating it.
# A potential set of params for {{leaf-queue-template}} could be given as an 
example.
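
To make points 1 and 4 concrete, a hedged sketch of what the combined 
configuration might look like; the {{auto-create-child-queue.enabled}} and 
{{leaf-queue-template}} property names and values here are assumptions for 
illustration, not confirmed against the final documentation:
{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Existing user/group mappings plus the new auto-creation form.
conf.set("yarn.scheduler.capacity.queue-mappings",
    "u:maria:engineering,g:webadmins:weblog,u:%user:parent1.%user");
// Assumed properties (illustrative): enable auto-creation under parent1
// and template the capacities of its auto-created leaf queues.
conf.set(
    "yarn.scheduler.capacity.root.parent1.auto-create-child-queue.enabled",
    "true");
conf.set(
    "yarn.scheduler.capacity.root.parent1.leaf-queue-template.capacity",
    "5");
conf.set(
    "yarn.scheduler.capacity.root.parent1.leaf-queue-template.maximum-capacity",
    "100");
{code}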

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch
>
>







[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Target Version/s: 2.8.2, 3.1.0, 2.9.1, 3.0.1  (was: 2.8.2, 3.1.0, 3.0.1)

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Fix Version/s: 3.1.0

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Priority: Minor  (was: Major)

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Minor
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log.
> This is an obvious error and is misleading.






[jira] [Commented] (YARN-7555) Support multiple resource types in YARN native services

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288712#comment-16288712
 ] 

genericqa commented on YARN-7555:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 49 unchanged - 0 fixed = 56 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red}  0m  2s{color} | 
{color:red} The patch has 1 ill-formed XML file(s). {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 21s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
53s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:gree

[jira] [Commented] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288709#comment-16288709
 ] 

genericqa commented on YARN-7647:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 33s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7647 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901806/YARN-7647.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ba745ffd2a3f 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18899/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18899/testReport/ |
| Max. process+thread count | 343 (vs. ulimit of 5000) |
| modules | C

[jira] [Commented] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288704#comment-16288704
 ] 

genericqa commented on YARN-7643:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 12 new + 251 unchanged - 0 fixed = 263 total (was 251) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7643 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901804/YARN-7643.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e012e6fe77bd 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18897/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache

[jira] [Commented] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288695#comment-16288695
 ] 

genericqa commented on YARN-7633:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7633 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901807/YARN-7633.3.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux ae8a5dd843fa 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 311 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18900/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch
>
>







[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288680#comment-16288680
 ] 

genericqa commented on YARN-7577:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 36 unchanged - 4 fixed = 36 total (was 40) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7577 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901799/YARN-7577.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b2b7b2b5ed58 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18896/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YAR

[jira] [Commented] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288679#comment-16288679
 ] 

genericqa commented on YARN-7633:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7633 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901805/YARN-7633.2.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux aa617257e988 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/18898/artifact/out/whitespace-eol.txt
 |
| Max. process+thread count | 330 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18898/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Affects Version/s: 2.8.0
   3.0.0-alpha1

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log. 
> This is an obvious error and is quite misleading.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7647:
--
Target Version/s: 2.8.2, 3.1.0, 3.0.1

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log. 
> This is an obvious error and is quite misleading.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288676#comment-16288676
 ] 

Weiwei Yang commented on YARN-7647:
---

Very straightforward fix, thanks [~fly_in_gis]. +1, pending Jenkins.

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log. 
> This is an obvious error and is quite misleading.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7612) Add Placement Processor and planner framework

2017-12-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288649#comment-16288649
 ] 

Arun Suresh edited comment on YARN-7612 at 12/13/17 3:45 AM:
-

[~leftnoteasy], makes sense - I can update the packages.
BatchedRequests and NodeCandidateSelector are part of the impl, though, not the 
sample.


was (Author: asuresh):
[~leftnoteasy], makes sense - I can update the packages

> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a planning algorithm framework that 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288672#comment-16288672
 ] 

Suma Shivaprasad commented on YARN-7633:


Fixed whitespace issues

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7633:
---
Attachment: YARN-7633.3.patch

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch, YARN-7633.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-7647:

Attachment: YARN-7647.001.patch

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log. 
> This is an obvious error and is quite misleading.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang reassigned YARN-7647:
---

Assignee: Yang Wang

> NM print inappropriate error log when node-labels is enabled
> 
>
> Key: YARN-7647
> URL: https://issues.apache.org/jira/browse/YARN-7647
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-7647.001.patch
>
>
> {code:title=NodeStatusUpdaterImpl.java}
>   ... ...
>   if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
>   LOG.debug("Node Labels {" + StringUtils.join(",", 
> previousNodeLabels)
>   + "} were Accepted by RM ");
> } else {
>   // case where updated labels from NodeLabelsProvider is sent to RM 
> and
>   // RM rejected the labels
>   LOG.error(
>   "NM node labels {" + StringUtils.join(",", previousNodeLabels)
>   + "} were not accepted by RM and message from RM : "
>   + response.getDiagnosticsMessage());
> }
>   ... ...
> {code}
> When LOG.isDebugEnabled() is false, the NM will always print the error log. 
> This is an obvious error and is quite misleading.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7619) Max AM Resource value in CS UI is different for every user

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288667#comment-16288667
 ] 

genericqa commented on YARN-7619:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 102 unchanged - 1 fixed = 102 total (was 103) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7619 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901790/YARN-7619.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 31979ef5fa6a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18895/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.t

[jira] [Created] (YARN-7647) NM print inappropriate error log when node-labels is enabled

2017-12-12 Thread Yang Wang (JIRA)
Yang Wang created YARN-7647:
---

 Summary: NM print inappropriate error log when node-labels is 
enabled
 Key: YARN-7647
 URL: https://issues.apache.org/jira/browse/YARN-7647
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yang Wang


{code:title=NodeStatusUpdaterImpl.java}
  ... ...
  if (response.getAreNodeLabelsAcceptedByRM() && LOG.isDebugEnabled()) {
  LOG.debug("Node Labels {" + StringUtils.join(",", previousNodeLabels)
  + "} were Accepted by RM ");
} else {
  // case where updated labels from NodeLabelsProvider is sent to RM and
  // RM rejected the labels
  LOG.error(
  "NM node labels {" + StringUtils.join(",", previousNodeLabels)
  + "} were not accepted by RM and message from RM : "
  + response.getDiagnosticsMessage());
}
  ... ...
{code}

When LOG.isDebugEnabled() is false, the NM will always print the error log. 
This is an obvious error and is quite misleading.
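
A minimal sketch of the likely shape of the fix (illustrative only; the actual 
change is in YARN-7647.001.patch and is not reproduced here): guard only the 
debug message with LOG.isDebugEnabled(), so the error branch is taken only when 
the RM actually rejected the labels.

{code:title=Sketch of a corrected condition}
if (response.getAreNodeLabelsAcceptedByRM()) {
  // Guard only the debug message with the log level; acceptance handling no
  // longer depends on whether debug logging is enabled.
  if (LOG.isDebugEnabled()) {
    LOG.debug("Node Labels {" + StringUtils.join(",", previousNodeLabels)
        + "} were Accepted by RM ");
  }
} else {
  // Reached only when the RM actually rejected the updated labels.
  LOG.error("NM node labels {" + StringUtils.join(",", previousNodeLabels)
      + "} were not accepted by RM and message from RM : "
      + response.getDiagnosticsMessage());
}
{code}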



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor and planner framework

2017-12-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288649#comment-16288649
 ] 

Arun Suresh commented on YARN-7612:
---

[~leftnoteasy], makes sense - I can update the packages

> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7633:
---
Attachment: YARN-7633.2.patch

Thanks [~wangda]. Attaching patch with review comments fixed.

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch, YARN-7633.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor and planner framework

2017-12-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288644#comment-16288644
 ] 

Wangda Tan commented on YARN-7612:
--

Thanks [~asuresh]. 

The latest patch looks much better. 

I haven't checked the details yet; some quick comments on the organization of 
the code packages.

Here's my proposal:

{code}
constraint/ 
api/ (btw what is spi? is it a typo?):
Algorithm
SchedulingProposalCollector
SchedulingRequestHandler
SchedulingResponseHandler
PlacementAlgorithmOutput
PlacedSchedulingRequest
SchedulingResponse

impl/
PlacementDispatcher
PlacementProcessor
SamplePlacementAlgorithm

algorithms/sample/:
BatchedRequests
SamplePlacementAlgorithm
NodeCandidateSelector

:
PlacementConstraintsManager
PlacementConstraintsManagerImpl
AllocationTagsManager
AllocationTagsNamespaces
InvalidAllocationTagsQueryException
{code}

In addition, do you think we should move the api and impl packages to 
{{scheduler.constraint}}? To me this is part of the scheduler rather than part 
of "placement". The same applies to reservation.

> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a planning algorithm framework that 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288639#comment-16288639
 ] 

Suma Shivaprasad edited comment on YARN-7643 at 12/13/17 2:49 AM:
--

Thanks [~wangda]. Attaching patch with checkstyle fixes.


was (Author: suma.shivaprasad):
Attaching patch with checkstyle fixes

> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch, YARN-7643.2.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if it 
> doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.
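
A purely illustrative sketch of the recovery flow described above (all names 
below are hypothetical; this is not the YARN-7643 patch):

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: on recovery, if the target leaf queue is missing but a
// queue-mapping placement context is available, recreate the queue first and
// only then re-attach the recovered application.
public final class RecoveryQueueSketch {
  private final Map<String, String> queues = new HashMap<>(); // leaf -> parent

  public void addApplicationOnRecovery(String queueName,
      String parentFromPlacement) {
    if (!queues.containsKey(queueName)) {
      if (parentFromPlacement == null) {
        // Without a placement context the scheduler cannot recreate the queue.
        throw new IllegalStateException("No placement context for " + queueName);
      }
      queues.put(queueName, parentFromPlacement); // auto-create the leaf queue
    }
    // ... re-attach the recovered application to the (re)created queue ...
  }
}
{code}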



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7643:
---
Attachment: YARN-7643.2.patch

Attaching patch with checkstyle fixes

> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch, YARN-7643.2.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if it 
> doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288635#comment-16288635
 ] 

Wangda Tan commented on YARN-7643:
--

[~suma.shivaprasad], I think you're correct for 1). Thanks for the explanation.

> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if it 
> doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288627#comment-16288627
 ] 

Suma Shivaprasad commented on YARN-7643:


{quote}
1) Why AbstractCSQueue changes required? (And it doesn't look correct for queue 
refresh cases.)
{quote}
Prior to this fix, the global scheduler configuration for enabling user 
metrics, "yarn.scheduler.capacity.user-metrics.enable", was incorrectly taken 
from the queue-specific configuration instead of the scheduler's 
CapacitySchedulerContext.configuration.

TestWorkPreservingRMRestart.checkCSQueue validates the user metrics, and that 
step fails without this change:
{code}
// *** check user metrics ***
QueueMetrics userMetrics =
    queueMetrics.getUserMetrics(app.getUser());
assertMetrics(userMetrics, 1, 0, 1, 0, 2,
    availableResources.getMemorySize(),
    availableResources.getVirtualCores(), usedResource.getMemorySize(),
    usedResource.getVirtualCores());
{code}
{quote}
2) Several places need to reformat in 
CapacityScheduler#addApplicationOnRecovery, like too long line, blank lines, 
etc. Could you take care of them?
{quote}
Will fix.
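
Going back to point 1), a minimal sketch of resolving the flag at the scheduler 
level (illustrative only; the class and method below are hypothetical names, 
not the actual AbstractCSQueue change):

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: the user-metrics switch is a global scheduler setting,
// so it should be resolved against the scheduler-wide configuration rather
// than a per-queue view of the configuration.
public final class UserMetricsFlagSketch {
  static final String ENABLE_USER_METRICS =
      "yarn.scheduler.capacity.user-metrics.enable";

  public static boolean isUserMetricsEnabled(Configuration schedulerConf) {
    // Defaults to false when the property is unset.
    return schedulerConf.getBoolean(ENABLE_USER_METRICS, false);
  }
}
{code}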



> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if it 
> doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7625) Expose NM node/containers resource utilization in JVM metrics

2017-12-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288605#comment-16288605
 ] 

Weiwei Yang commented on YARN-7625:
---

Thank you [~jlowe] for the review and commit.

> Expose NM node/containers resource utilization in JVM metrics
> -
>
> Key: YARN-7625
> URL: https://issues.apache.org/jira/browse/YARN-7625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 3.1.0
>
> Attachments: YARN-7625.001.patch, YARN-7625.002.patch, 
> YARN-7625.003.patch, YARN-7625.004.patch
>
>
> YARN-4055 adds node resource utilization to the NM; we should expose this info 
> in NM metrics. It helps in the following cases:
> # Users want to check NM load in the NM web UI or via the REST API
> # Provide the API to be further integrated into the new YARN UI, to display NM 
> load status



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-12-12 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7577:
-
Attachment: YARN-7577.006.patch

Fixing checkstyle

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch, YARN-7577.003.patch, YARN-7577.004.patch, 
> YARN-7577.005.patch, YARN-7577.006.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers.
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
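
Since the description calls for running the test under both schedulers, here is 
a minimal sketch of one way to do that with JUnit's Parameterized runner (the 
class below is hypothetical and is not the actual TestAMRestart change):

{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// Hypothetical sketch: run the same test once per scheduler implementation.
@RunWith(Parameterized.class)
public class BothSchedulersSketch {
  @Parameterized.Parameters
  public static Collection<Object[]> schedulers() {
    return Arrays.asList(new Object[][] {
        {"org.apache.hadoop.yarn.server.resourcemanager"
            + ".scheduler.capacity.CapacityScheduler"},
        {"org.apache.hadoop.yarn.server.resourcemanager"
            + ".scheduler.fair.FairScheduler"}});
  }

  private final String schedulerClass;

  public BothSchedulersSketch(String schedulerClass) {
    this.schedulerClass = schedulerClass;
  }

  @Test
  public void testRunsUnderEachScheduler() {
    // A real test would set the scheduler class in the RM configuration
    // before starting the RM, then run the original assertions.
    System.out.println("Running with scheduler: " + schedulerClass);
  }
}
{code}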



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7590) Improve container-executor validation check

2017-12-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288599#comment-16288599
 ] 

Miklos Szegedi commented on YARN-7590:
--

[~eyang], the first line of {{main()}} calls {{assert_valid_setup()}}, which 
calls {{setuid(0)}}. You need to sample the yarn uid with {{getuid()}} and 
store it before this call to avoid the following error:
{code}
515 uid 2002 gid 2002 euid 0 egid 2002
517 uid 0 gid 2002 euid 0 egid 2002
main : command provided 0
main : run as user is nobody
main : requested yarn user is foo
521 uid 0 gid 2002 euid 0 egid 2002
556 uid 0 gid 2002 euid 0 egid 2002
uid 0 gid 2002 euid 0 egid 2002
558 uid 0 gid 2002 euid 99 egid 99
Permission mismatch for /tmp/hadoop-foo/nm-local-dir for uid: 0.
{code}


> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7590.001.patch
>
>
> There is minimal validation of the prefix path in container-executor. If YARN 
> is compromised, an attacker can use container-executor to change the ownership 
> of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access. We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is the same as the one in yarn-site.xml, and that 
> yarn-site.xml is owned by root, mode 644, and marked as final in the property.
> # Make sure the user path is not a symlink and usercache is not a symlink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7646) MR job (based on old version tarball) get failed due to incompatible resource request

2017-12-12 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-7646:
--
Target Version/s: 3.0.1  (was: 3.0.0)

> MR job (based on old version tarball) get failed due to incompatible resource 
> request
> -
>
> Key: YARN-7646
> URL: https://issues.apache.org/jira/browse/YARN-7646
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Junping Du
>Priority: Blocker
>
> With a quick workaround for HDFS-12920 (setting a non-time unit in 
> hdfs-site.xml), the job still fails with the following error:
> {noformat}
> 2017-12-12 16:39:13,105 ERROR [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: ERROR IN CONTACTING RM. 
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request, requested memory < 0, or requested memory > max configured, 
> requestedMemory=-1, maxMemory=8192
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:275)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:246)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:217)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:388)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy81.allocate(Unknown Source)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:206)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:783)
>   at 
> org.apache.hadoop.mapreduce.v2.
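
The key frame in the trace above is {{SchedulerUtils.validateResourceRequest}}. 
A minimal illustrative sketch of that kind of bounds check (the class below is 
hypothetical, not the actual SchedulerUtils code) shows why a legacy client 
that encodes an unset memory value as -1 is rejected:

{code}
// Hypothetical sketch of the bounds check behind the error above: any request
// whose memory is negative or above the configured maximum is rejected, so a
// legacy client sending requestedMemory=-1 fails immediately.
public final class ResourceRequestCheckSketch {
  static void validateMemory(long requestedMemory, long maxMemory) {
    if (requestedMemory < 0 || requestedMemory > maxMemory) {
      throw new IllegalArgumentException(
          "Invalid resource request, requested memory < 0, or requested memory"
              + " > max configured, requestedMemory=" + requestedMemory
              + ", maxMemory=" + maxMemory);
    }
  }

  public static void main(String[] args) {
    validateMemory(-1, 8192); // reproduces the rejection described above
  }
}
{code}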

[jira] [Commented] (YARN-7646) MR job (based on old version tarball) get failed due to incompatible resource request

2017-12-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288572#comment-16288572
 ] 

Andrew Wang commented on YARN-7646:
---

At this point we just need to get 3.0.0 out; if there are compat issues, let's 
address them in 3.0.1.

> MR job (based on old version tarball) get failed due to incompatible resource 
> request
> -
>
> Key: YARN-7646
> URL: https://issues.apache.org/jira/browse/YARN-7646
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Junping Du
>Priority: Blocker
>
> With a quick workaround for HDFS-12920 (setting a non-time unit in 
> hdfs-site.xml), the job still fails with the following error:
> {noformat}
> 2017-12-12 16:39:13,105 ERROR [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: ERROR IN CONTACTING RM. 
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request, requested memory < 0, or requested memory > max configured, 
> requestedMemory=-1, maxMemory=8192
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:275)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:246)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:217)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:388)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy81.allocate(Unknown Source)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:206)
>   at 
> org.apache.hadoop.mapreduce.v2

[jira] [Commented] (YARN-7646) MR job (based on old version tarball) get failed due to incompatible resource request

2017-12-12 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288566#comment-16288566
 ] 

Junping Du commented on YARN-7646:
--

CC [~andrew.wang] [~vinodkv], [~jlowe].

> MR job (based on old version tarball) get failed due to incompatible resource 
> request
> -
>
> Key: YARN-7646
> URL: https://issues.apache.org/jira/browse/YARN-7646
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Junping Du
>Priority: Blocker
>
> With a quick workaround for HDFS-12920 (setting a non-time unit in 
> hdfs-site.xml), the job still fails with the following error:
> {noformat}
> 2017-12-12 16:39:13,105 ERROR [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: ERROR IN CONTACTING RM. 
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request, requested memory < 0, or requested memory > max configured, 
> requestedMemory=-1, maxMemory=8192
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:275)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:256)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:246)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:217)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:388)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy81.allocate(Unknown Source)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:206)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.j

[jira] [Created] (YARN-7646) MR job (based on old version tarball) get failed due to incompatible resource request

2017-12-12 Thread Junping Du (JIRA)
Junping Du created YARN-7646:


 Summary: MR job (based on old version tarball) get failed due to 
incompatible resource request
 Key: YARN-7646
 URL: https://issues.apache.org/jira/browse/YARN-7646
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Junping Du
Priority: Blocker


With a quick workaround for HDFS-12920 (setting a non-time unit in 
hdfs-site.xml), the job still fails with the following error:
{noformat}
2017-12-12 16:39:13,105 ERROR [RMCommunicator Allocator] 
org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: ERROR IN CONTACTING RM. 
org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
resource request, requested memory < 0, or requested memory > max configured, 
requestedMemory=-1, maxMemory=8192
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:275)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:240)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:256)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:246)
at 
org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:217)
at 
org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:388)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at 
org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy81.allocate(Unknown Source)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:206)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:783)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:280)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:279)
at java.lang.Thread.run(Thread.java:745)
{noformat}
It looks like an incompatible change in communication between the old MR client 
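
For reference, the RM-side check that produces the error above has roughly this 
shape (a simplified sketch based on the stack trace, not the exact Hadoop 
source; the exception type here is a stand-in):
{code}
// Simplified sketch of the memory validation that rejects the request above
// (illustrative only; the real check lives in SchedulerUtils per the trace).
public class ResourceRequestCheck {
  static void validateMemory(long requestedMemory, long maxMemory) {
    if (requestedMemory < 0 || requestedMemory > maxMemory) {
      throw new IllegalArgumentException(
          "Invalid resource request, requested memory < 0, or requested memory"
              + " > max configured, requestedMemory=" + requestedMemory
              + ", maxMemory=" + maxMemory);
    }
  }

  public static void main(String[] args) {
    // An old-version MR client that leaves the memory field unset ends up
    // sending -1, which a newer RM rejects with the exception shown above.
    validateMemory(-1, 8192);
  }
}
{code}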

[jira] [Updated] (YARN-7555) Support multiple resource types in YARN native services

2017-12-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7555:
-
Attachment: YARN-7555.005.patch

Attached ver.5 patch, fixed the warning. Thanks for the reviews, [~jianhe].

> Support multiple resource types in YARN native services
> ---
>
> Key: YARN-7555
> URL: https://issues.apache.org/jira/browse/YARN-7555
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7555.003.patch, YARN-7555.004.patch, 
> YARN-7555.005.patch, YARN-7555.wip-001.patch
>
>
> We need to support specifying multiple resource type in addition to 
> memory/cpu in YARN native services






[jira] [Updated] (YARN-7619) Max AM Resource value in CS UI is different for every user

2017-12-12 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7619:
-
Attachment: YARN-7619.001.patch

Uploading patch 001. This is not a perfect solution, but it's close. The 
pre-weighted AM limit for all users in a particular queue is calculated in 
{{LeafQueue#getUserAMResourceLimitPerPartition}} and passed to the UI via the 
{{UserInfo}} object for each user when the UI is rendered. This is a little 
awkward because the AM limit for users in the queue is a per-queue value, but 
when rendering, I wanted to multiply the value by each user's weight.

The value displayed in the UI for Max AM Resource may not always be valid 
for weighted users because it is not normalized, and it may exceed the 
queue-level AM limit if the weight is large. But since this is only for 
display purposes, I think it's acceptable.
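
As a rough sketch of the weighting described above (the names are illustrative, 
not the actual CapacitySchedulerPage code):
{code}
// Illustrative sketch of the display-time weighting described above.
public class AmLimitDisplay {
  // preWeightedUserAmLimitMb: the per-queue, pre-weighted user AM limit
  // (computed once per queue); userWeight: that user's configured weight.
  static long displayedMaxAmResourceMb(long preWeightedUserAmLimitMb,
      float userWeight) {
    // Note: not normalized and not capped, so for a large weight this can
    // exceed the queue-level AM limit -- acceptable for display only.
    return (long) (preWeightedUserAmLimitMb * userWeight);
  }

  public static void main(String[] args) {
    System.out.println(displayedMaxAmResourceMb(4096, 1.5f)); // prints 6144
  }
}
{code}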

> Max AM Resource value in CS UI is different for every user
> --
>
> Key: YARN-7619
> URL: https://issues.apache.org/jira/browse/YARN-7619
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2, 3.1.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Max AM Resources is Different for Each User.png, 
> YARN-7619.001.patch
>
>
> YARN-7245 addressed the problem that the {{Max AM Resource}} in the capacity 
> scheduler UI used to contain the queue-level AM limit instead of the 
> user-level AM limit. It fixed this by using the user-specific AM limit that 
> is calculated in {{LeafQueue#activateApplications}}, stored in each user's 
> {{LeafQueue#User}} object, and retrieved via 
> {{UserInfo#getResourceUsageInfo}}.
> The problem is that this user-specific AM limit depends on the activity of 
> other users and other applications in a queue, and it is only calculated and 
> updated when a user's application is activated. So, when 
> {{CapacitySchedulerPage}} retrieves the user-specific AM limit, it is a stale 
> value unless an application was recently activated for a particular user.






[jira] [Commented] (YARN-7633) [Documentation] Add documentation for auto queue creation feature and related configurations

2017-12-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288514#comment-16288514
 ] 

Wangda Tan commented on YARN-7633:
--

Thanks [~suma.shivaprasad], only one minor comment: 

"Users or groups might submit applications to the auto-created leaf queues for 
a limited time and stop using them. Hence there could be more number of leaf 
queues auto-created under the parent queue than its guaranteed capacity. The 
current policy implementation allots either configured or zero capacity on a 
best-effort basis based on availability of capacity on the parent queue and the 
application submission order across leaf queues."

This should be moved to the 
{{yarn.scheduler.capacity..auto-create-child-queue.management-policy}}
 section.

> [Documentation] Add documentation for auto queue creation feature and related 
> configurations
> 
>
> Key: YARN-7633
> URL: https://issues.apache.org/jira/browse/YARN-7633
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7633.1.patch
>
>







[jira] [Commented] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288509#comment-16288509
 ] 

Wangda Tan commented on YARN-7643:
--

[~suma.shivaprasad], thanks for working on the fix. Comments: 

1) Why are the AbstractCSQueue changes required? (And they don't look correct 
for queue refresh cases.)

2) Several places in CapacityScheduler#addApplicationOnRecovery need 
reformatting, e.g. overly long lines, blank lines, etc. Could you take care of 
them?

> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if 
> it doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.






[jira] [Commented] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-12-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288501#comment-16288501
 ] 

Wangda Tan commented on YARN-5418:
--

Thanks [~xgong], in general the patch looks good except for a few minor comments: 

1. getLogStartTime/getLogEndTime: the {{Block html}} param is not necessary.

2.
{code}
  try {
fileController = this.factory.getFileControllerForRead(
appId, $(APP_OWNER));
foundAggregatedLogs = true;
  } catch (Exception fnf) {
// Do Nothing
  }
{code}

It's better to catch IOException. 
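
For illustration, the narrowed catch might look like this (a sketch of the 
suggested change, not the committed code):
{code}
  try {
    fileController = this.factory.getFileControllerForRead(
        appId, $(APP_OWNER));
    foundAggregatedLogs = true;
  } catch (IOException ioe) {
    // No aggregated logs could be read for this app; leave
    // foundAggregatedLogs false and render only the local files.
  }
{code}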

3. Findbugs / java docs. 

> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch, YARN-5418.3.patch, 
> YARN-5418.4.branch-2.patch, YARN-5418.branch-2.4.patch, 
> YARN-5418.trunk.4.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated, it's no longer available on this listing page.
> It would be useful to list aggregated files as well as the current set of 
> files.






[jira] [Commented] (YARN-7645) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is flakey with FairScheduler

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288470#comment-16288470
 ] 

genericqa commented on YARN-7645:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7645 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901769/YARN-7645.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 858bc1266d1b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 06f0eb2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18893/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18893/testReport/ |
| Max. process+thread count | 878 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-pro

[jira] [Assigned] (YARN-7598) Document how to use classpath isolation for aux-services in YARN

2017-12-12 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong reassigned YARN-7598:
---

Assignee: Xuan Gong

> Document how to use classpath isolation for aux-services in YARN
> 
>
> Key: YARN-7598
> URL: https://issues.apache.org/jira/browse/YARN-7598
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>







[jira] [Commented] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288383#comment-16288383
 ] 

Hudson commented on YARN-7595:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13363 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13363/])
YARN-7595. Container launching code suppresses close exceptions after (jlowe: 
rev 2abab1d7c53e64c160384fd5a3ac4cd8ffa57af4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java


> Container launching code suppresses close exceptions after writes
> -
>
> Key: YARN-7595
> URL: https://issues.apache.org/jira/browse/YARN-7595
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7595.001.patch, YARN-7595.002.patch, 
> YARN-7595.003.patch
>
>
> There are a number of places in code related to container launching where the 
> following pattern is used:
> {code}
>   try {
> ...write to stream outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> Unfortunately, this suppresses any IOException that occurs during the close() 
> method on outStream.  If the stream is buffered or could otherwise fail to 
> finish writing the file when close() is called, this can lead to 
> partial/corrupted data without throwing an I/O error.
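
One common way to avoid the suppression (a sketch of the general fix, not 
necessarily the exact change committed) is try-with-resources, which lets 
close() failures propagate:
{code}
// Sketch: with try-with-resources, outStream.close() is invoked automatically
// and an IOException thrown by close() propagates to the caller, instead of
// being logged and swallowed by IOUtils.cleanupWithLogger in a finally block.
// (openOutputStream() is a placeholder here, not a real helper in this code.)
try (OutputStream outStream = openOutputStream()) {
  // ...write to stream outStream...
}
{code}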






[jira] [Commented] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-12 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288371#comment-16288371
 ] 

Jim Brennan commented on YARN-7595:
---

Will do.  Thanks!


> Container launching code suppresses close exceptions after writes
> -
>
> Key: YARN-7595
> URL: https://issues.apache.org/jira/browse/YARN-7595
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7595.001.patch, YARN-7595.002.patch, 
> YARN-7595.003.patch
>
>
> There are a number of places in code related to container launching where the 
> following pattern is used:
> {code}
>   try {
> ...write to stream outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> Unfortunately, this suppresses any IOException that occurs during the close() 
> method on outStream.  If the stream is buffered or could otherwise fail to 
> finish writing the file when close() is called, this can lead to 
> partial/corrupted data without throwing an I/O error.






[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288362#comment-16288362
 ] 

genericqa commented on YARN-7540:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 59 unchanged - 1 fixed = 62 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 56s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7540 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901759/YARN-7540.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| u

[jira] [Commented] (YARN-7595) Container launching code suppresses close exceptions after writes

2017-12-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288348#comment-16288348
 ] 

Jason Lowe commented on YARN-7595:
--

Thanks for updating the patch!  The unit test failure is unrelated and tracked 
by YARN-7629.

+1 lgtm.  Committing this.


> Container launching code suppresses close exceptions after writes
> -
>
> Key: YARN-7595
> URL: https://issues.apache.org/jira/browse/YARN-7595
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: YARN-7595.001.patch, YARN-7595.002.patch, 
> YARN-7595.003.patch
>
>
> There are a number of places in code related to container launching where the 
> following pattern is used:
> {code}
>   try {
> ...write to stream outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> Unfortunately, this suppresses any IOException that occurs during the close() 
> method on outStream.  If the stream is buffered or could otherwise fail to 
> finish writing the file when close() is called, this can lead to 
> partial/corrupted data without throwing an I/O error.






[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288333#comment-16288333
 ] 

Hudson commented on YARN-7565:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13362 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13362/])
YARN-7565. Yarn service pre-maturely releases the container after AM (jianhe: 
rev 3ebe6a7819292ce6bd557e36137531b59890c845)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockServiceAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Configurations.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/YarnRegistryAttributes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConf.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java


> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response to 
> the AM heartbeat. 
> Currently, the Service Master immediately releases containers that are not 
> reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to the AM in the 
> heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.
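
The waiting behavior described above can be sketched as follows (all names and 
the timeout value are illustrative assumptions, not the actual ServiceScheduler 
code or configuration):
{code}
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the recovery wait described above.
public class RecoveryWaitSketch {
  static final long RECOVERY_WINDOW_MS = 120_000L; // assumed config value
  final long registrationTime = System.currentTimeMillis();
  // Containers from the previous attempt, awaiting re-report by the RM.
  final Set<String> pendingRecovery = new HashSet<>();

  // Called when RM re-reports a recovered container in a heartbeat response.
  void onContainerRecovered(String containerId) {
    pendingRecovery.remove(containerId); // recovered in time: keep it
  }

  // Called periodically; releases only what was never re-reported in time.
  void onTimerTick() {
    if (System.currentTimeMillis() - registrationTime > RECOVERY_WINDOW_MS) {
      pendingRecovery.forEach(id -> System.out.println("release " + id));
      pendingRecovery.clear();
    }
  }

  public static void main(String[] args) {
    RecoveryWaitSketch s = new RecoveryWaitSketch();
    s.pendingRecovery.add("container_01");
    s.onContainerRecovered("container_01"); // re-reported: kept, not released
    s.onTimerTick();                        // nothing left to release
  }
}
{code}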






[jira] [Comment Edited] (YARN-7630) Fix AMRMToken handling in AMRMProxy

2017-12-12 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288315#comment-16288315
 ] 

Botong Huang edited comment on YARN-7630 at 12/12/17 9:44 PM:
--

I've validated the fix in our 1k-node test cluster, where we found the issue 
initially. The RM key rolling interval is set to once every hour. Since the 
fix was applied last week, there have been no related issues. So I am pretty 
confident about it. 


was (Author: botong):
I've validated the fix in our 1k node test cluster, where we found the issue 
initially. After the fix is applied last week, there's no related issue since 
then. So I am pretty confident about it. 

> Fix AMRMToken handling in AMRMProxy
> ---
>
> Key: YARN-7630
> URL: https://issues.apache.org/jira/browse/YARN-7630
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7630.v1.patch, YARN-7630.v1.patch
>
>
> Symptom: after RM rolls over the master key for AMRMToken, whenever the RPC 
> connection from FederationInterceptor to RM breaks due to a transient network 
> issue and reconnects, the heartbeat to RM starts failing with the “Invalid 
> AMRMToken” exception. Whenever it hits, it happens for both the home RM and 
> the secondary RMs. 
> Related facts: 
> 1. When RM issues a new AMRMToken, it always sends it with the service name 
> field as an empty string. The RPC layer on the AM side sets it properly 
> before starting to use it. 
> 2. UGI keeps all tokens in a map from serviceName->Token. Initially, 
> AMRMClientUtils.createRMProxy() is used to load the first token and start the 
> RM connection. 
> 3. When RM renews the token, YarnServerSecurityUtils.updateAMRMToken() is used 
> to load it into UGI and replace the existing token (with the same serviceName 
> key). 
> Bug: 
> The bug is that 2-AMRMClientUtils.createRMProxy() and 
> 3-YarnServerSecurityUtils.updateAMRMToken() do not handle this sequence 
> consistently. We always need to load the token (with its empty service name) 
> into UGI first, before we set the serviceName, so that the previous AMRMToken 
> is overridden. But 2 does it in reverse. That’s why, after RM rolls the 
> amrmToken, the UGI ends up with two tokens. Whenever the RPC connection 
> breaks and reconnects, the wrong token can be picked, triggering the 
> exception. 
> Fix: 
> Load the AMRMToken into UGI first and then update the service name field for 
> RPC.
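
In sketch form, the consistent ordering called for by the fix 
(UserGroupInformation#addToken and SecurityUtil#setTokenService are real Hadoop 
APIs; the surrounding variable names here are assumptions):
{code}
// Illustrative ordering only -- not the exact patch.
// 1. Add the token to the UGI while its service name is still empty, so it
//    replaces the previous AMRMToken stored under the same (empty) key.
ugi.addToken(amrmToken);
// 2. Only afterwards set the service name that the RPC layer needs.
SecurityUtil.setTokenService(amrmToken, rmAddress);
{code}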






[jira] [Commented] (YARN-7630) Fix AMRMToken handling in AMRMProxy

2017-12-12 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288315#comment-16288315
 ] 

Botong Huang commented on YARN-7630:


I've validated the fix in our 1k-node test cluster, where we found the issue 
initially. Since the fix was applied last week, there have been no related 
issues. So I am pretty confident about it. 

> Fix AMRMToken handling in AMRMProxy
> ---
>
> Key: YARN-7630
> URL: https://issues.apache.org/jira/browse/YARN-7630
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7630.v1.patch, YARN-7630.v1.patch
>
>
> Symptom: after RM rolls over the master key for AMRMToken, whenever the RPC 
> connection from FederationInterceptor to RM breaks due to a transient network 
> issue and reconnects, the heartbeat to RM starts failing with the “Invalid 
> AMRMToken” exception. Whenever it hits, it happens for both the home RM and 
> the secondary RMs. 
> Related facts: 
> 1. When RM issues a new AMRMToken, it always sends it with the service name 
> field as an empty string. The RPC layer on the AM side sets it properly 
> before starting to use it. 
> 2. UGI keeps all tokens in a map from serviceName->Token. Initially, 
> AMRMClientUtils.createRMProxy() is used to load the first token and start the 
> RM connection. 
> 3. When RM renews the token, YarnServerSecurityUtils.updateAMRMToken() is used 
> to load it into UGI and replace the existing token (with the same serviceName 
> key). 
> Bug: 
> The bug is that 2-AMRMClientUtils.createRMProxy() and 
> 3-YarnServerSecurityUtils.updateAMRMToken() do not handle this sequence 
> consistently. We always need to load the token (with its empty service name) 
> into UGI first, before we set the serviceName, so that the previous AMRMToken 
> is overridden. But 2 does it in reverse. That’s why, after RM rolls the 
> amrmToken, the UGI ends up with two tokens. Whenever the RPC connection 
> breaks and reconnects, the wrong token can be picked, triggering the 
> exception. 
> Fix: 
> Load the AMRMToken into UGI first and then update the service name field for 
> RPC.






[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7565:
--
Fix Version/s: 3.1.0

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response to 
> the AM heartbeat. 
> Currently, the Service Master immediately releases containers that are not 
> reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to the AM in the 
> heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.






[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7565:
--
Fix Version/s: (was: yarn-native-services)

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response to 
> the AM heartbeat. 
> Currently, the Service Master immediately releases containers that are not 
> reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to the AM in the 
> heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.






[jira] [Updated] (YARN-7642) Container execution type is not updated after promotion/demotion in NMContext

2017-12-12 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7642:
--
Priority: Critical  (was: Major)

> Container execution type is not updated after promotion/demotion in NMContext
> -
>
> Key: YARN-7642
> URL: https://issues.apache.org/jira/browse/YARN-7642
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
>
> Found this bug while working on YARN-7617. After calling the API to promote a 
> container from OPPORTUNISTIC to GUARANTEED, the node manager web page still 
> shows the container execution type as OPPORTUNISTIC. It looks like the 
> container execution type in NMContext was not updated accordingly.






[jira] [Commented] (YARN-7642) Container execution type is not updated after promotion/demotion in NMContext

2017-12-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288301#comment-16288301
 ] 

Arun Suresh commented on YARN-7642:
---

Thanks for raising this, [~cheersyang].
Upgrading to critical, since it would also cause problems when we are picking 
containers to kill.
Yup, what you've proposed should fix this. Looking forward to a patch.

> Container execution type is not updated after promotion/demotion in NMContext
> -
>
> Key: YARN-7642
> URL: https://issues.apache.org/jira/browse/YARN-7642
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
>
> Found this bug while working on YARN-7617. After calling the API to promote a 
> container from OPPORTUNISTIC to GUARANTEED, the node manager web page still 
> shows the container execution type as OPPORTUNISTIC. It looks like the 
> container execution type in NMContext was not updated accordingly.






[jira] [Updated] (YARN-7645) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is flakey with FairScheduler

2017-12-12 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7645:

Attachment: YARN-7645.001.patch

The patch adds a heartbeat after the scheduler update.
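
As a rough illustration of the shape of such a change (a hypothetical test 
snippet; nm1, rm, and attempt are assumed fixtures, and the exact calls in the 
patch may differ):
{code}
// Hypothetical sketch: after the scheduler update has recomputed fair shares,
// drive one more node heartbeat so a NODE_UPDATE reaches the scheduler and
// the AM container can actually be allocated before the state is asserted.
nm1.nodeHeartbeat(true);
rm.waitForState(attempt.getAppAttemptId(), RMAppAttemptState.ALLOCATED);
{code}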

> TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is 
> flakey with FairScheduler
> -
>
> Key: YARN-7645
> URL: https://issues.apache.org/jira/browse/YARN-7645
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7645.001.patch
>
>
> We've noticed some flakiness in 
> {{TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers}} 
> when using {{FairScheduler}}:
> {noformat}
> java.lang.AssertionError: Attempt state is not correct (timeout). 
> expected: but was:
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.amRestartTests(TestContainerResourceUsage.java:275)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers(TestContainerResourceUsage.java:254)
> {noformat}






[jira] [Commented] (YARN-7645) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is flakey with FairScheduler

2017-12-12 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288290#comment-16288290
 ] 

Robert Kanter commented on YARN-7645:
-

When the test passes, we see this sequence of log messages:
{noformat}
2017-12-11 11:21:36,837 INFO  [AsyncDispatcher event handler] 
attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(919)) - 
appattempt_1513020094849_0001_01 State change from SUBMITTED to SCHEDULED 
on event = ATTEMPT_ADDED
2017-12-11 11:21:36,837 DEBUG [AsyncDispatcher event handler] 
event.AsyncDispatcher (AsyncDispatcher.java:dispatch(188)) - Dispatching the 
event 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppNodeUpdateEvent.EventType:
 NODE_UPDATE
2017-12-11 11:21:36,837 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl 
(RMAppImpl.java:handle(870)) - Processing event for 
application_1513020094849_0001 of type NODE_UPDATE
2017-12-11 11:21:36,837 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl 
(RMAppImpl.java:processNodeUpdate(986)) - Received node update 
event:NODE_USABLE for node:127.0.0.1:1234 with state:RUNNING
2017-12-11 11:21:36,837 INFO  [Thread-1] resourcemanager.MockRM 
(MockRM.java:waitForState(283)) - App State is : ACCEPTED
2017-12-11 11:21:36,838 INFO  [Thread-1] resourcemanager.MockRM 
(MockRM.java:waitForState(357)) - Attempt State is : SCHEDULED
2017-12-11 11:21:36,838 INFO  [Thread-1] resourcemanager.MockRM 
(MockRM.java:launchAM(1168)) - Launch AM appattempt_1513020094849_0001_01
2017-12-11 11:21:36,979 DEBUG [AsyncDispatcher event handler] 
event.AsyncDispatcher (AsyncDispatcher.java:dispatch(188)) - Dispatching the 
event 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType:
 STATUS_UPDATE
2017-12-11 11:21:36,979 DEBUG [Thread-1] fair.FSLeafQueue 
(FSLeafQueue.java:updateDemand(322)) - The updated demand for root.default is 
; the max is 
2017-12-11 11:21:36,979 DEBUG [Thread-1] fair.FSLeafQueue 
(FSLeafQueue.java:updateDemand(324)) - The updated fairshare for root.default 
is 
2017-12-11 11:21:36,979 DEBUG [Thread-1] fair.FSParentQueue 
(FSParentQueue.java:updateDemand(133)) - Counting resource from root.default 
; Total resource demand for root now 
2017-12-11 11:21:36,979 DEBUG [AsyncDispatcher event handler] rmnode.RMNodeImpl 
(RMNodeImpl.java:handle(666)) - Processing 127.0.0.1:1234 of type STATUS_UPDATE
2017-12-11 11:21:36,985 DEBUG [AsyncDispatcher event handler] 
event.AsyncDispatcher (AsyncDispatcher.java:dispatch(188)) - Dispatching the 
event 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType:
 NODE_UPDATE
2017-12-11 11:21:36,986 DEBUG [Thread-1] fair.FSLeafQueue 
(FSLeafQueue.java:updateDemand(322)) - The updated demand for root.user is 
; the max is 
2017-12-11 11:21:36,986 DEBUG [Thread-1] fair.FSLeafQueue 
(FSLeafQueue.java:updateDemand(324)) - The updated fairshare for root.user is 

2017-12-11 11:21:36,986 DEBUG [Thread-1] fair.FSParentQueue 
(FSParentQueue.java:updateDemand(133)) - Counting resource from root.user 
; Total resource demand for root now 
2017-12-11 11:21:36,986 DEBUG [Thread-1] fair.FSParentQueue 
(FSParentQueue.java:updateDemand(144)) - The updated demand for root is 
; the max is 
2017-12-11 11:21:36,986 DEBUG [Thread-1] fair.FSQueue 
(FSQueue.java:setFairShare(293)) - The updated fairShare for root is 

2017-12-11 11:21:36,987 DEBUG [AsyncDispatcher event handler] 
scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:nodeUpdate(1083)) - 
nodeUpdate: 127.0.0.1:1234 cluster capacity: 
2017-12-11 11:21:36,987 DEBUG [AsyncDispatcher event handler] 
scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:nodeUpdate(1116)) - 
Node being looked for scheduling 127.0.0.1:1234 availableResource: 

2017-12-11 11:21:36,988 DEBUG [AsyncDispatcher event handler] fair.FSLeafQueue 
(FSLeafQueue.java:assignContainer(333)) - Node 127.0.0.1 offered to queue: 
root.user fairShare: 
2017-12-11 11:21:37,049 DEBUG [AsyncDispatcher event handler] 
scheduler.AppSchedulingInfo 
(AppSchedulingInfo.java:updateMetricsForAllocatedContainer(589)) - allocate: 
applicationId=application_1513020094849_0001 
container=container_1513020094849_0001_01_01 host=127.0.0.1:1234 user=user 
resource= type=OFF_SWITCH
{noformat}
When it fails, we see this:
{noformat}
2017-12-08 11:58:46,248 INFO  [AsyncDispatcher event handler] 
attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(919)) - 
appattempt_1512763125850_0001_01 State change from SUBMITTED to SCHEDULED 
on event = ATTEMPT_ADDED
2017-12-08 11:58:46,249 DEBUG [AsyncDispatcher event handler] 
event.AsyncDispatcher (AsyncDispatcher.java:dispatch(188)) - Dispatching the 
event 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppNodeUpdateEvent.EventType:
 NODE_UPDATE
2017-12-08 11:58:46,249 DEBUG [AsyncDispatcher event handler] rmapp.RMAppImpl 
(RMAppImpl.java:handle(870)) - Processin

[jira] [Created] (YARN-7645) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is flakey with FairScheduler

2017-12-12 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-7645:
---

 Summary: 
TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is 
flakey with FairScheduler
 Key: YARN-7645
 URL: https://issues.apache.org/jira/browse/YARN-7645
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Robert Kanter
Assignee: Robert Kanter


We've noticed some flakiness in 
{{TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers}} 
when using {{FairScheduler}}:
{noformat}
java.lang.AssertionError: Attempt state is not correct (timeout). 
expected: but was:
at 
org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.amRestartTests(TestContainerResourceUsage.java:275)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers(TestContainerResourceUsage.java:254)
{noformat}






[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288271#comment-16288271
 ] 

genericqa commented on YARN-7540:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 59 unchanged - 0 fixed = 61 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 51s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
55s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7540 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901752/YARN-7540.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| u

[jira] [Commented] (YARN-7630) Fix AMRMToken handling in AMRMProxy

2017-12-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288270#comment-16288270
 ] 

Arun Suresh commented on YARN-7630:
---

I understand it might be difficult to write a test case for this. [~botong], 
can you detail the manual test steps?
+1 pending that.

> Fix AMRMToken handling in AMRMProxy
> ---
>
> Key: YARN-7630
> URL: https://issues.apache.org/jira/browse/YARN-7630
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7630.v1.patch, YARN-7630.v1.patch
>
>
> Symptom: after the RM rolls over the master key for the AMRMToken, whenever the 
> RPC connection from FederationInterceptor to the RM breaks due to a transient 
> network issue and reconnects, heartbeats to the RM start failing with an 
> "Invalid AMRMToken" exception. Whenever it hits, it happens for both the home 
> RM and the secondary RMs. 
> Related facts: 
> 1. When the RM issues a new AMRMToken, it always sends it with the service name 
> field set to an empty string. The RPC layer on the AM side sets the service 
> name properly before starting to use the token. 
> 2. The UGI keeps all tokens in a map from serviceName to Token. Initially, 
> AMRMClientUtils.createRMProxy() is used to load the first token and start the 
> RM connection. 
> 3. When the RM renews the token, YarnServerSecurityUtils.updateAMRMToken() is 
> used to load it into the UGI and replace the existing token (under the same 
> serviceName key). 
> Bug: 
> The bug is that 2 (AMRMClientUtils.createRMProxy()) and 
> 3 (YarnServerSecurityUtils.updateAMRMToken()) do not handle this sequence 
> consistently. We always need to load the token (while its service name is still 
> empty) into the UGI first, and only then set the serviceName, so that the 
> previous AMRMToken is overridden. But 2 does it in the reverse order. That is 
> why, after the RM rolls the AMRMToken, the UGI ends up with two tokens: 
> whenever the RPC connection breaks and reconnects, the wrong token can be 
> picked, triggering the exception. 
> Fix: 
> Load the AMRMToken into the UGI first, then update the service name field for 
> RPC.
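
A minimal sketch of the corrected ordering (class, method, and variable names 
here are illustrative, not taken from the patch):

{code}
// Sketch only: apply a freshly issued AMRMToken in the order the
// description above calls for.
import java.net.InetSocketAddress;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;

public final class AMRMTokenUpdateSketch {
  public static void updateToken(UserGroupInformation appUgi,
      Token<AMRMTokenIdentifier> newToken, InetSocketAddress rmAddress) {
    // 1. Add the token while its service field is still empty: the UGI keys
    //    tokens by service name, so this replaces the previous AMRMToken
    //    under the same key instead of accumulating a second token.
    appUgi.addToken(newToken);
    // 2. Only afterwards point the token at the RM address for the RPC layer.
    SecurityUtil.setTokenService(newToken, rmAddress);
  }
}
{code}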



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-12 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7540:

Attachment: YARN-7540.006.patch

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch, 
> YARN-7540.003.patch, YARN-7540.004.patch, YARN-7540.005.patch, 
> YARN-7540.006.patch
>
>
> For YARN docker applications, launching through the CLI works differently from 
> launching through the REST API. All applications launched through the REST API 
> are currently stored in the yarn user's HDFS home directory, while applications 
> managed through the CLI are stored in individual users' HDFS home directories. 
> For consistency, we want the yarn app CLI to interact with the API service to 
> manage applications. For performance reasons, it is easier to list all 
> applications from one user's home directory than to crawl all users' home 
> directories. For security reasons, it is safer to access only one user's home 
> directory instead of all of them. Given the reasons above, the proposal is to 
> change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn app -destroy}} 
> work. Instead of calling the HDFS API and RM API to launch containers, the CLI 
> will be converted to call the API service REST API residing in the RM. The RM 
> performs the persistence and the operations to launch the actual application.
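
As a hedged illustration of the proposed flow (the endpoint path and payload 
shape are assumptions based on the YARN services REST API, not taken from the 
patch), {{yarn app -launch}} would reduce to a single REST call against the RM:

{code}
// Illustrative only: POST a service spec to a hypothetical API service
// endpoint on the RM instead of driving HDFS and the RM APIs directly.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public final class LaunchViaApiServiceSketch {
  public static void main(String[] args) throws Exception {
    String serviceJson = "{\"name\":\"sleeper-service\",\"components\":[]}";
    URL url = new URL("http://rm-host:8088/app/v1/services"); // assumed path
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(serviceJson.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("RM responded: " + conn.getResponseCode());
  }
}
{code}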



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288197#comment-16288197
 ] 

genericqa commented on YARN-7565:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 97 unchanged - 1 fixed = 97 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 27s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.i

[jira] [Commented] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288162#comment-16288162
 ] 

genericqa commented on YARN-6315:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 10s{color} | {color:orange} root: The patch generated 1 new + 244 unchanged 
- 1 fixed = 245 total (was 245) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
11s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 24s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m  5s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
42s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}274m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
|   | hadoop.mapreduce.v2.T

[jira] [Updated] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-12 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7540:

Attachment: YARN-7540.005.patch

- Fixed the flex and save operations to map correctly to the REST API.

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch, 
> YARN-7540.003.patch, YARN-7540.004.patch, YARN-7540.005.patch
>
>
> For YARN docker applications, launching through the CLI works differently from 
> launching through the REST API. All applications launched through the REST API 
> are currently stored in the yarn user's HDFS home directory, while applications 
> managed through the CLI are stored in individual users' HDFS home directories. 
> For consistency, we want the yarn app CLI to interact with the API service to 
> manage applications. For performance reasons, it is easier to list all 
> applications from one user's home directory than to crawl all users' home 
> directories. For security reasons, it is safer to access only one user's home 
> directory instead of all of them. Given the reasons above, the proposal is to 
> change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn app -destroy}} 
> work. Instead of calling the HDFS API and RM API to launch containers, the CLI 
> will be converted to call the API service REST API residing in the RM. The RM 
> performs the persistence and the operations to launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7555) Support multiple resource types in YARN native services

2017-12-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288123#comment-16288123
 ] 

Jian He commented on YARN-7555:
---

- unused imports in Resource.java and TestServiceAM
- accidental change in the first line of YarnServiceAPI.md
- findbugs warnings
Other than those, LGTM.

> Support multiple resource types in YARN native services
> ---
>
> Key: YARN-7555
> URL: https://issues.apache.org/jira/browse/YARN-7555
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7555.003.patch, YARN-7555.004.patch, 
> YARN-7555.wip-001.patch
>
>
> We need to support specifying multiple resource type in addition to 
> memory/cpu in YARN native services



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7625) Expose NM node/containers resource utilization in JVM metrics

2017-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288114#comment-16288114
 ] 

Hudson commented on YARN-7625:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13361 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13361/])
YARN-7625. Expose NM node/containers resource utilization in JVM (jlowe: rev 
06f0eb2dce2a7a098f7844682ea6c232d0ddb0be)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/Context.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/MockResourceCalculatorPlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeResourceMonitorImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeResourceMonitor.java


> Expose NM node/containers resource utilization in JVM metrics
> -
>
> Key: YARN-7625
> URL: https://issues.apache.org/jira/browse/YARN-7625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 3.1.0
>
> Attachments: YARN-7625.001.patch, YARN-7625.002.patch, 
> YARN-7625.003.patch, YARN-7625.004.patch
>
>
> YARN-4055 adds node resource utilization to the NM; we should expose this info 
> in NM metrics. It helps in the following cases:
> # Users want to check the NM load in the NM web UI or via the REST API
> # It provides an API the new YARN UI can integrate with to display NM 
> load status
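
A hedged sketch of how such gauges might be exposed through the metrics2 
library (field and metric names are assumptions, not the actual patch):

{code}
// Sketch only; a real metrics source must be registered with
// DefaultMetricsSystem before the @Metric fields are injected.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(about = "Sketch of NM utilization gauges", context = "yarn")
public class NodeUtilizationMetricsSketch {
  @Metric("Physical memory used on the node, in MB")
  MutableGaugeLong nodeUsedMemMB;

  @Metric("CPU utilization across the node, in percent")
  MutableGaugeInt nodeCpuUtilizationPercent;

  // Called by the node/containers monitors after each sampling round.
  public void setNodeUtilization(long memMB, int cpuPercent) {
    nodeUsedMemMB.set(memMB);
    nodeCpuUtilizationPercent.set(cpuPercent);
  }
}
{code}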



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7644) NM gets backed up deleting docker containers

2017-12-12 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7644:
-
Component/s: (was: yarn)
 nodemanager

> NM gets backed up deleting docker containers
> 
>
> Key: YARN-7644
> URL: https://issues.apache.org/jira/browse/YARN-7644
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> We are sending a {{docker stop}} to the docker container with a timeout of 10 
> seconds when we shut down a container. If the container does not stop after 
> 10 seconds, we force-kill it. However, the {{docker stop}} command is a 
> blocking call. So in cases where lots of containers don't go down with the 
> initial SIGTERM, we have to wait 10+ seconds for each {{docker stop}} to 
> return. This ties up the ContainerLaunch handler, so these kill events back 
> up, and it appears to back up new container launches as well. 
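
For illustration, the blocking stop could be handed off to a small executor so 
kill events stop queuing behind it; a sketch under that assumption, not the 
actual fix:

{code}
// Sketch only: run blocking "docker stop" calls off the event-handler
// thread so kill events and new launches do not back up behind them.
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class AsyncDockerStopSketch {
  private final ExecutorService stopPool = Executors.newFixedThreadPool(8);

  public void stopContainerAsync(String containerId) {
    stopPool.submit(() -> {
      try {
        // Blocks for up to ~10s while docker waits out the SIGTERM grace period.
        Process p = new ProcessBuilder("docker", "stop", "--time=10", containerId)
            .inheritIO().start();
        p.waitFor();
      } catch (IOException e) {
        System.err.println("docker stop failed for " + containerId + ": " + e);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
  }
}
{code}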



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7644) NM gets backed up deleting docker containers

2017-12-12 Thread Eric Badger (JIRA)
Eric Badger created YARN-7644:
-

 Summary: NM gets backed up deleting docker containers
 Key: YARN-7644
 URL: https://issues.apache.org/jira/browse/YARN-7644
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Badger
Assignee: Eric Badger


We are sending a {{docker stop}} to the docker container with a timeout of 10 
seconds when we shut down a container. If the container does not stop after 10 
seconds, we force-kill it. However, the {{docker stop}} command is a blocking 
call. So in cases where lots of containers don't go down with the initial 
SIGTERM, we have to wait 10+ seconds for each {{docker stop}} to return. This 
ties up the ContainerLaunch handler, so these kill events back up, and it 
appears to back up new container launches as well. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7625) Expose NM node/containers resource utilization in JVM metrics

2017-12-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288069#comment-16288069
 ] 

Jason Lowe commented on YARN-7625:
--

Thanks for updating the patch!

+1 lgtm.  Committing this.


> Expose NM node/containers resource utilization in JVM metrics
> -
>
> Key: YARN-7625
> URL: https://issues.apache.org/jira/browse/YARN-7625
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: YARN-7625.001.patch, YARN-7625.002.patch, 
> YARN-7625.003.patch, YARN-7625.004.patch
>
>
> YARN-4055 adds node resource utilization to the NM; we should expose this info 
> in NM metrics. It helps in the following cases:
> # Users want to check the NM load in the NM web UI or via the REST API
> # It provides an API the new YARN UI can integrate with to display NM 
> load status



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288063#comment-16288063
 ] 

Wangda Tan commented on YARN-7242:
--

Thanks [~GergelyNovak]. [~sunilg], could you help review this patch? I'm a 
little swamped this week. Thanks.

> Support specify values of different resource types in DistributedShell for 
> easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-7242.001.patch, YARN-7242.002.patch, 
> YARN-7242.003.patch
>
>
> Currently, DS supports specifying a resource profile; it would be better to 
> also allow users to specify resource keys/values directly from the command line.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288054#comment-16288054
 ] 

genericqa commented on YARN-7622:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 4 new + 39 unchanged - 2 fixed = 43 total (was 41) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 29s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 10s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7622 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901717/YARN-7622.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5ec82a859328 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8bb83a8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18886/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreComm

[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-12 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7565:

Attachment: YARN-7565.005.patch

Patch 5 includes checkstyle fixes.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response to 
> the AM heartbeat. 
> Currently, the Service Master immediately releases any containers that are not 
> reported in the AM registration response.
> Instead, the master can wait a configured amount of time for the containers to 
> be recovered by the RM, which reports them to the AM in heartbeat responses. If 
> a container is not reported within the configured interval, the master can 
> then release it.
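
The description boils down to a simple timer pattern; a minimal sketch, with 
all names assumed:

{code}
// Sketch only: hold recovered-but-unreported containers for a configured
// interval before releasing them.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class ContainerRecoveryWaitSketch {
  private final Set<String> pendingRecovery = ConcurrentHashMap.newKeySet();
  private final ScheduledExecutorService timer =
      Executors.newSingleThreadScheduledExecutor();

  public void onRegistered(Set<String> previousContainers, long waitMs) {
    pendingRecovery.addAll(previousContainers);
    // After the wait, release whatever the RM never reported back.
    timer.schedule(() -> {
      pendingRecovery.forEach(this::release);
      pendingRecovery.clear();
    }, waitMs, TimeUnit.MILLISECONDS);
  }

  // Container showed up in a heartbeat response within the interval: keep it.
  public void onRecoveredFromHeartbeat(String containerId) {
    pendingRecovery.remove(containerId);
  }

  private void release(String containerId) {
    System.out.println("releasing unrecovered container " + containerId);
  }
}
{code}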



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287979#comment-16287979
 ] 

genericqa commented on YARN-7242:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 3 new + 206 unchanged - 3 fixed = 209 total (was 209) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
13s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7242 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901725/YARN-7242.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bdc39a5cb605 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8bb83a8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18889/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18889/testReport/ |
| Max. process+thread count | 646 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-

[jira] [Commented] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287951#comment-16287951
 ] 

genericqa commented on YARN-7643:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 19 new + 251 unchanged - 0 fixed = 270 total (was 251) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7643 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901701/YARN-7643.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 017fd054883b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8bb83a8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18885/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18885/artifact/out/patch-unit-hadoop-yarn-proje

[jira] [Commented] (YARN-7641) Allow filter on logs page

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287941#comment-16287941
 ] 

genericqa commented on YARN-7641:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 41s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7641 |
| GITHUB PR | https://github.com/apache/hadoop/pull/312 |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux b9fad6643a16 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8bb83a8 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 446 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/1/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow filter on logs page
> -
>
> Key: YARN-7641
> URL: https://issues.apache.org/jira/browse/YARN-7641
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>
> The select boxes on the Application logs page are not searchable. This doesn't 
> scale when there are many containers. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7557) It should be possible to specify resource types in the fair scheduler increment value

2017-12-12 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287936#comment-16287936
 ] 

Gergo Repas commented on YARN-7557:
---

[~templedf] I see, the min/max allocation scheduler config has already been 
handled. Thanks for the clarification.

> It should be possible to specify resource types in the fair scheduler 
> increment value
> -
>
> Key: YARN-7557
> URL: https://issues.apache.org/jira/browse/YARN-7557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Gergo Repas
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7557) It should be possible to specify resource types in the fair scheduler increment value

2017-12-12 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287931#comment-16287931
 ] 

Daniel Templeton commented on YARN-7557:


YARN-7556 is about setting the max and min resources in the *queue* 
configuration.  This is about setting the increment in the *scheduler* 
configuration.  Setting the min and max in the scheduler configuration was part 
of the initial resource types work, and there is no increment in the queue 
configuration.

> It should be possible to specify resource types in the fair scheduler 
> increment value
> -
>
> Key: YARN-7557
> URL: https://issues.apache.org/jira/browse/YARN-7557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Gergo Repas
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7557) It should be possible to specify resource types in the fair scheduler increment value

2017-12-12 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287888#comment-16287888
 ] 

Gergo Repas commented on YARN-7557:
---

[~templedf] I believe this should be about modifying 
{{FairSchedulerConfiguration.getIncrementAllocation()}} to handle custom 
resource types, right? I am uncertain about this because I would have expected 
YARN-7556 to also touch {{FairSchedulerConfiguration.getMinimumAllocation()}} 
and {{FairSchedulerConfiguration.getMaximumAllocation()}}, but maybe I'm 
missing something.
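
For context, a minimal sketch of what a resource-type-aware increment lookup 
could look like; the per-resource key pattern used here is purely an assumption 
for illustration, not an agreed design:

{code}
// Sketch only: the key pattern below is hypothetical.
import org.apache.hadoop.conf.Configuration;

public final class IncrementAllocationSketch {
  public static long getIncrement(Configuration conf, String resourceName,
      long defaultIncrement) {
    String key = "yarn.resource-types." + resourceName + ".increment-allocation";
    return conf.getLong(key, defaultIncrement);
  }
}
{code}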

> It should be possible to specify resource types in the fair scheduler 
> increment value
> -
>
> Key: YARN-7557
> URL: https://issues.apache.org/jira/browse/YARN-7557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Assignee: Gergo Repas
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7242) Support specify values of different resource types in DistributedShell for easier testing

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-7242:

Attachment: YARN-7242.003.patch

> Support specify values of different resource types in DistributedShell for 
> easier testing
> -
>
> Key: YARN-7242
> URL: https://issues.apache.org/jira/browse/YARN-7242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Gergely Novák
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-7242.001.patch, YARN-7242.002.patch, 
> YARN-7242.003.patch
>
>
> Currently, DS supports specifying a resource profile; it would be better to 
> also allow users to specify resource keys/values directly from the command line.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7536) em-table filter UX issues

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287858#comment-16287858
 ] 

genericqa commented on YARN-7536:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7536 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7536 |
| GITHUB PR | https://github.com/apache/hadoop/pull/308 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18887/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> em-table filter UX issues
> -
>
> Key: YARN-7536
> URL: https://issues.apache.org/jira/browse/YARN-7536
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>Priority: Minor
>  Labels: yarn-ui
>
> When the filters are rendered in the YARN UI, there are some issues:
> 1) The filters are not expanded by default
> 2) The filter section is empty even when there are 2 items to filter



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-12 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-7622:

Attachment: YARN-7622.003.patch

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch, 
> YARN-7622.003.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor and planner framework

2017-12-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287790#comment-16287790
 ] 

Arun Suresh commented on YARN-7612:
---

Many thanks for the review, [~pgaref].

bq. Currently the PlacementConstraintsManager interface considers constraints as 
part of appIDs, but we also need to support *cluster-wide constraints* - so I 
would add another, more generic setter and getter as well
Agreed - but I think we might need an extra API that would be invoked by the 
cluster operator etc. I was thinking, as a first cut, let's go with intra-app 
placement. In any case, inter-app placement can still be accomplished by one app 
referring to another app's allocation tags. Also, we can flesh out the 
PlacementConstraintManager further in YARN-6596.

w.r.t. the PlacementAlgorithm:
bq. ..One way to solve this would be having the algorithm implementation keep an 
extra data structure with the placed tags; another would be to extend the 
TagManager to keep a temporary mapping. 
Yup - that was my intention actually (I think I commented the same on 
YARN-7522). When you tackle YARN-7613, feel free to add any missing API to the 
TagsManager and to use the TagsManager for intermediate storage. You should also 
consider storing internal state in the final Algorithm implementation.

w.r.t. the PlacementAlgorithm abstract class not being in the spi: I have a 
generic Algorithm interface in the spi package (which can be used for other 
algorithms pertaining to GlobalScheduling). The PlacementAlgorithm is specific 
to our processor-based implementation of placement constraints; a rough sketch 
of the two constraint scopes follows below.

w.r.t. the enum names and the other nits: agreed - will post a subsequent patch 
with the fixes.
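
A rough sketch of the two constraint scopes discussed above; the method shapes 
are assumptions for illustration only:

{code}
// Sketch only: intra-app vs. cluster-wide constraint registration.
import java.util.Set;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

public interface PlacementConstraintsManagerSketch {
  // Intra-app scope: constraints registered per application (the first cut).
  void addConstraint(ApplicationId appId, Set<String> sourceTags,
      PlacementConstraint constraint);
  PlacementConstraint getConstraint(ApplicationId appId, Set<String> sourceTags);

  // Cluster-wide scope: a more generic setter/getter, likely invoked by the
  // cluster operator (to be fleshed out in YARN-6596).
  void addGlobalConstraint(Set<String> sourceTags, PlacementConstraint constraint);
  PlacementConstraint getGlobalConstraint(Set<String> sourceTags);
}
{code}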


> Add Placement Processor and planner framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a Planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and places 
> them on nodes.
> The actual planning algorithm(s) will be handled in a YARN-7613.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2017-12-12 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287769#comment-16287769
 ] 

Sunil G commented on YARN-5150:
---

Thanks. Committing shortly

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to the YARN UI, admins can easily get an understanding 
> of relative resource usage and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7643:
---
Attachment: YARN-7643.1.patch

Attached a patch for review. Added a UT to validate recovery of apps running on 
auto-created leaf queues, along with post-recovery validations.

> Handle recovery of applications on auto-created leaf queues
> ---
>
> Key: YARN-7643
> URL: https://issues.apache.org/jira/browse/YARN-7643
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7643.1.patch
>
>
> CapacityScheduler application recovery should auto-create the leaf queue if it 
> doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
> context so that the scheduler has the necessary placement context to recreate 
> the queue.
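
A hedged sketch of the recovery path being described, assuming the 
submission-time placement machinery can simply be re-applied on recovery (class 
and method usage here is an assumption, not the patch):

{code}
// Sketch only: re-resolve the queue mapping during recovery so the
// scheduler has the placement context needed to re-create a missing
// auto-created leaf queue.
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.server.resourcemanager.placement.ApplicationPlacementContext;
import org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager;

public final class RecoveryPlacementSketch {
  public static ApplicationPlacementContext resolveOnRecovery(
      PlacementManager placementManager,
      ApplicationSubmissionContext submissionContext,
      String user) throws YarnException {
    // Same mapping logic as at submission time, re-applied on recovery.
    return placementManager.placeApplication(submissionContext, user);
  }
}
{code}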



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-12-12 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-6315:
--
Attachment: YARN-6315.006.patch

Updated the patch to fix all but one of the checkstyle issues; the remaining 
indentation warning seems trivial. Also, the findbugs warning is present both 
with and without the patch. There were no test failures reported, since the 
test builds failed with:
{code}
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
[ERROR] ExecutionException The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
{code}
Hopefully this precommit will go through without this error.

> Improve LocalResourcesTrackerImpl#isResourcePresent to return false for 
> corrupted files
> ---
>
> Key: YARN-6315
> URL: https://issues.apache.org/jira/browse/YARN-6315
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-6315.001.patch, YARN-6315.002.patch, 
> YARN-6315.003.patch, YARN-6315.004.patch, YARN-6315.005.patch, 
> YARN-6315.006.patch
>
>
> We currently check whether a resource is present by making sure that the file 
> exists locally. There can be a case where the LocalizationTracker thinks it 
> has the resource when the file exists but has size 0, or a size smaller than 
> the "expected" size of the LocalResource. This JIRA tracks the change to 
> harden the isResourcePresent call to address that case.
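
A minimal sketch of the hardened check described above, using hypothetical names; 
the actual patch may differ:
{code}
// Hypothetical sketch: treat a resource as present only when the local
// file exists AND its size matches the size recorded for the resource.
import java.io.File;

final class ResourcePresenceSketch {

  /**
   * @param localPath    path the resource was localized to
   * @param expectedSize size recorded when the resource was localized
   * @return true only if the file exists and is not truncated or empty
   */
  static boolean isResourcePresent(String localPath, long expectedSize) {
    File f = new File(localPath);
    // A zero-length or short file indicates corrupted localization, so
    // report it as absent and force re-localization.
    return f.exists() && f.length() == expectedSize;
  }
}
{code}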






[jira] [Created] (YARN-7643) Handle recovery of applications on auto-created leaf queues

2017-12-12 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created YARN-7643:
--

 Summary: Handle recovery of applications on auto-created leaf 
queues
 Key: YARN-7643
 URL: https://issues.apache.org/jira/browse/YARN-7643
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Suma Shivaprasad
Assignee: Suma Shivaprasad


CapacityScheduler application recovery should auto-create the leaf queue if it 
doesn't exist. Also, RMAppManager needs to set the queue-mapping placement 
context so that the scheduler has the necessary placement context to recreate 
the queue.






[jira] [Updated] (YARN-7638) Add unit tests for Preemption

2017-12-12 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7638:
---
Summary: Add unit tests for Preemption  (was: Add unit tests for Preemption 
and Recovery)

> Add unit tests for Preemption
> -
>
> Key: YARN-7638
> URL: https://issues.apache.org/jira/browse/YARN-7638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>
> Add unit tests for inter-leaf-queue preemption based on utilization, and for 
> work-preserving restart/recovery.
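
A hypothetical skeleton of what such tests might cover; class names, method 
names, and steps are assumptions, not the eventual patch:
{code}
// Hypothetical JUnit skeleton; names and steps are illustrative only.
import org.junit.Test;

public class TestInterLeafQueuePreemptionSketch {

  @Test
  public void testPreemptionBetweenLeafQueuesByUtilization() {
    // 1. Configure a parent queue with two leaf queues.
    // 2. Saturate the cluster from queue A, then submit apps to queue B.
    // 3. Assert that containers are preempted from A until B reaches its
    //    guaranteed, utilization-based share.
  }

  @Test
  public void testWorkPreservingRestartAndRecovery() {
    // 1. Start apps, then restart the RM in work-preserving mode.
    // 2. Assert running containers survive and queue state is recovered.
  }
}
{code}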





