[jira] [Commented] (YARN-3931) default-node-label-expression doesn’t apply when an application is submitted by RM rest api

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101122#comment-16101122
 ] 

Hadoop QA commented on YARN-3931:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 39 unchanged - 0 fixed = 47 total (was 39) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-3931 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878938/YARN-3931.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bdc06bf4831a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a92bf39 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16552/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16552/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16552/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101119#comment-16101119
 ] 

Jian He commented on YARN-6130:
---

oh, saw it.

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch
>
>







[jira] [Updated] (YARN-6866) Minor clean-up and fixes in anticipation of YARN-2915 merge with trunk

2017-07-25 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6866:
-
Summary: Minor clean-up and fixes in anticipation of YARN-2915 merge with 
trunk  (was: Minor clean-up and fixes in anticipation of merge with trunk)

> Minor clean-up and fixes in anticipation of YARN-2915 merge with trunk
> --
>
> Key: YARN-6866
> URL: https://issues.apache.org/jira/browse/YARN-6866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6866-YARN-2915.v1.patch, 
> YARN-6866-YARN-2915.v2.patch, YARN-6866-YARN-2915.v3.patch, 
> YARN-6866-YARN-2915.v4.patch
>
>
> We have done e2e testing of YARN Federation successfully, and we have minor 
> clean-ups, like a pom version upgrade, a redundant "." in a configuration 
> string, documentation updates, etc., which we want to address before the 
> merge to trunk. This jira tracks the fixes we did as described above to 
> ensure a proper e2e run.






[jira] [Commented] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101110#comment-16101110
 ] 

Hadoop QA commented on YARN-6866:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
36s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
44s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6866 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878936/YARN-6866-YARN-2915.v4.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 

[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-07-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101095#comment-16101095
 ] 

Sunil G commented on YARN-6788:
---

The findbugs warning is unrelated; it comes from the branch compilation. This 
patch resolves that issue, which is why it does not show up in the patch 
section.

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch, YARN-6788-YARN-3926.005.patch, 
> YARN-6788-YARN-3926.006.patch, YARN-6788-YARN-3926.007.patch, 
> YARN-6788-YARN-3926.008.patch, YARN-6788-YARN-3926.009.patch, 
> YARN-6788-YARN-3926.010.patch, YARN-6788-YARN-3926.011.patch, 
> YARN-6788-YARN-3926.012.patch, YARN-6788-YARN-3926.013.patch
>
>
> Currently we see roughly a 15% performance delta with this branch.
> This patch makes a few performance improvements to address that gap.
> It also handles 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].






[jira] [Assigned] (YARN-6862) Nodemanager resource usage metrics sometimes are negative

2017-07-25 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou reassigned YARN-6862:
-

Assignee: YunFan Zhou

> Nodemanager resource usage metrics sometimes are negative
> -
>
> Key: YARN-6862
> URL: https://issues.apache.org/jira/browse/YARN-6862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
>
> When we collect real-time resource usage metrics in the NM, we found that 
> those values are sometimes invalid.
> For example, the following values were collected at one point:
> {noformat}
> "milliVcoresUsed":-5808,
> "currentPmemUsage":-1,
> "currentVmemUsage":-1,
> "cpuUsagePercentPerCore":-968.1026,
> "cpuUsageTotalCoresPercentage":-24.202564,
> "pmemLimit":2147483648,
> "vmemLimit":4509715456
> {noformat}
> Given the many negative values, there may be a bug in the NM.
> We should fix it, because the NM's real-time metrics are quite important for 
> us.
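
One place worth checking: some of Hadoop's process-tree calculators report -1 
as an "unavailable" sentinel (e.g. ResourceCalculatorProcessTree.UNAVAILABLE), 
and arithmetic over such sentinels could produce the other negative values. A 
minimal guard on the reporting side, assuming that convention holds here:

{code:java}
// Sketch only: flag negative readings as "unavailable" instead of
// publishing them as real usage. Assumes -1 follows the UNAVAILABLE
// convention used by the resource calculators.
final class MetricGuard {
  static final long UNAVAILABLE = -1L;

  static long sanitize(long reading) {
    return reading < 0 ? UNAVAILABLE : reading;
  }

  public static void main(String[] args) {
    System.out.println(sanitize(-5808L));       // -1 (unavailable, not real usage)
    System.out.println(sanitize(2147483648L));  // 2147483648 (passes through)
  }
}
{code}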






[jira] [Commented] (YARN-6871) Add a new REST API GetAppReport call lighter and faster than the current one

2017-07-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101091#comment-16101091
 ] 

Sunil G commented on YARN-6871:
---

Yes [~subru]. You could use "deSelects=resourceRequest" for getApps/getApp, and 
it will make your REST response lighter. If more attributes need to be skipped, 
deSelects is designed so that it can accept a comma-separated list of 
attributes. I think it will be better if we use that, as Subru mentioned.
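
To make that concrete, the requests would look roughly like the following (the 
RM host/port is a placeholder, and the exact attribute names accepted by 
deSelects should be checked against the RM REST API docs):

{noformat}
# Skip the heavy ResourceRequest payload:
http://<rm-host>:8088/ws/v1/cluster/apps?deSelects=resourceRequest

# deSelects is designed to accept a comma-separated list of attributes:
http://<rm-host>:8088/ws/v1/cluster/apps?deSelects=resourceRequest,<anotherAttribute>
{noformat}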

> Add a new REST API GetAppReport call lighter and faster than the current one
> 
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>
> This jira tracks the effort to create a new REST API similar to the current 
> GetAppReport, but lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The new method returns only the following essential information for each 
> application:
> * The application id;
> * The application name;
> * The application state according to the ResourceManager - valid values are 
> members of the YarnApplicationState enum: NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING, FINISHED, FAILED, KILLED;
> * The final status of the application if finished - reported by the 
> application itself - valid values are: UNDEFINED, SUCCEEDED, FAILED, KILLED;
> * The web URL that can be used to track the application;
> * Detailed diagnostics information;
> * The URL of the application master container logs;
> * The HTTP address of the application master's node;
> * The progress of the application as a percent.
> The YARN RM will return the new result faster, use fewer compute cycles to 
> create the report, and thus improve both the RM's and the client's 
> performance.
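
As a rough sketch of how small the proposed per-application record would be 
compared to the full AppReport (all names below are illustrative assumptions, 
not the final API):

{code:java}
// Illustrative only: the essential fields listed above as a plain holder
// class. Field names are assumptions for this sketch, not a wire format.
public final class LightAppReport {
  public String applicationId;      // the application id
  public String name;               // the application name
  public String state;              // YarnApplicationState, e.g. RUNNING
  public String finalStatus;        // UNDEFINED, SUCCEEDED, FAILED, KILLED
  public String trackingUrl;        // web URL to track the application
  public String diagnostics;        // detailed diagnostics information
  public String amContainerLogsUrl; // URL of the AM container logs
  public String amNodeHttpAddress;  // HTTP address of the AM's node
  public float progress;            // progress as a percent
}
{code}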






[jira] [Commented] (YARN-6712) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-07-25 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101086#comment-16101086
 ] 

Akira Ajisaka commented on YARN-6712:
-

No, I don't mind. You can split it yourself. Thanks.
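
For anyone following along, a typical before/after for this kind of migration 
(the class below is a made-up example, not code from the patch):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class HeartbeatHandler {
  // Before (commons-logging):
  //   private static final Log LOG = LogFactory.getLog(HeartbeatHandler.class);
  //   LOG.debug("Heartbeat from " + node);  // concatenates even when debug is off
  private static final Logger LOG =
      LoggerFactory.getLogger(HeartbeatHandler.class);

  void handle(String node) {
    // slf4j {} placeholders defer string formatting until the level check passes
    LOG.debug("Heartbeat from {}", node);
  }
}
{code}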

> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6712
> URL: https://issues.apache.org/jira/browse/YARN-6712
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>







[jira] [Updated] (YARN-3931) default-node-label-expression doesn’t apply when an application is submitted by RM rest api

2017-07-25 Thread kyungwan nam (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-3931:
---
Attachment: YARN-3931.002.patch

Sorry, I missed this issue for a long time.
I attached a patch including a test that validates the initial value when using 
the REST API.

> default-node-label-expression doesn’t apply when an application is submitted 
> by RM rest api
> ---
>
> Key: YARN-3931
> URL: https://issues.apache.org/jira/browse/YARN-3931
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
> Environment: hadoop-2.6.0
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>  Labels: oct16-easy, patch
> Attachments: YARN-3931.001.patch, YARN-3931.002.patch
>
>
> * yarn.scheduler.capacity.<queue-path>.default-node-label-expression=large_disk
> * submit an application using the REST API without "app-node-label-expression" 
> or "am-container-node-label-expression"
> * the RM doesn't allocate containers to the hosts associated with the 
> large_disk node label
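
For reference, with a concrete queue path the property from the first step 
reads as follows (root.default is just an example queue):

{noformat}
yarn.scheduler.capacity.root.default.default-node-label-expression=large_disk
{noformat}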






[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-25 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101064#comment-16101064
 ] 

Rohith Sharma K S commented on YARN-6130:
-

Yes, it is implemented in YARN-5638. The comparison code is in 
AppCollectorData#happensBefore.
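
For readers who have not opened that class: a standalone sketch of the kind of 
happens-before comparison involved, assuming the collector data carries an RM 
identifier plus a monotonically increasing version (names here are assumptions 
based on this thread, not copied from AppCollectorData):

{code:java}
// Illustrative sketch, not the actual AppCollectorData code.
final class CollectorStamp {
  final long rmIdentifier;  // identifies the RM incarnation that issued the data
  final long version;       // monotonically increasing within one RM incarnation

  CollectorStamp(long rmIdentifier, long version) {
    this.rmIdentifier = rmIdentifier;
    this.version = version;
  }

  /** True if this stamp is strictly older than {@code other}. */
  boolean happensBefore(CollectorStamp other) {
    return this.rmIdentifier < other.rmIdentifier
        || (this.rmIdentifier == other.rmIdentifier
            && this.version < other.version);
  }
}
{code}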

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch
>
>







[jira] [Commented] (YARN-6802) Support view leaf queue am resource usage in RM web ui

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101062#comment-16101062
 ] 

Hadoop QA commented on YARN-6802:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 859 unchanged - 0 fixed = 861 total (was 859) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878932/YARN-6802.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fba7e9873f2a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a92bf39 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16549/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16549/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16549/testReport/ |
| modules | C: 

[jira] [Updated] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6866:
-
Attachment: YARN-6866-YARN-2915.v4.patch

Fixing the test failure.

> Minor clean-up and fixes in anticipation of merge with trunk
> 
>
> Key: YARN-6866
> URL: https://issues.apache.org/jira/browse/YARN-6866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6866-YARN-2915.v1.patch, 
> YARN-6866-YARN-2915.v2.patch, YARN-6866-YARN-2915.v3.patch, 
> YARN-6866-YARN-2915.v4.patch
>
>
> We have done e2e testing of YARN Federation successfully, and we have minor 
> clean-ups, like a pom version upgrade, a redundant "." in a configuration 
> string, documentation updates, etc., which we want to address before the 
> merge to trunk. This jira tracks the fixes we did as described above to 
> ensure a proper e2e run.






[jira] [Commented] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101046#comment-16101046
 ] 

Hadoop QA commented on YARN-6866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
55s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 46s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6866 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101039#comment-16101039
 ] 

Hadoop QA commented on YARN-6866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6866 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6712) Moving logging APIs over to slf4j in hadoop-yarn-server-common

2017-07-25 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101023#comment-16101023
 ] 

Yeliang Cang commented on YARN-6712:


Hi [~ajisakaa], would you mind me taking this over? I am interested in the 
hadoop-yarn module. Could you split the jira, or should I split it myself?

> Moving logging APIs over to slf4j in hadoop-yarn-server-common
> --
>
> Key: YARN-6712
> URL: https://issues.apache.org/jira/browse/YARN-6712
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-6712.01.patch
>
>







[jira] [Commented] (YARN-6855) CLI Proto Modifications to support Node Attributes

2017-07-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16101017#comment-16101017
 ] 

Naganarasimha G R commented on YARN-6855:
-

Thanks for the quick review, [~wangda].
# Agree.
# Would {{HostToAttribute}} or {{NodeToAttribute}} be better, as mentioned in 
YARN-3409?
# As the dom information is common to all 3 operations (i.e. 
replace/add/remove), I thought it was sufficient to have a single API and pass 
the "operation" as a method argument. Thoughts? (See the sketch below.)
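
To make the third point concrete, the single-API shape being suggested would 
look roughly like this (names are illustrative, not the proposed proto):

{code:java}
// Illustrative only: one entry point with the operation as an argument,
// instead of three separate replace/add/remove APIs.
import java.util.Map;
import java.util.Set;

enum AttributeMappingOperation { REPLACE, ADD, REMOVE }

interface NodeAttributeAdmin {
  // node -> attributes to apply under the given operation
  void mapAttributesToNodes(AttributeMappingOperation op,
      Map<String, Set<String>> nodeToAttributes);
}
{code}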


> CLI Proto Modifications to support Node Attributes
> --
>
> Key: YARN-6855
> URL: https://issues.apache.org/jira/browse/YARN-6855
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6855-YARN-3409.001.patch, 
> YARN-6855-YARN-3409.002.patch, YARN-6855-YARN-3409.003.patch
>
>
> This jira focuses only on the proto modifications required for the CLI






[jira] [Updated] (YARN-6802) Support view leaf queue am resource usage in RM web ui

2017-07-25 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6802:
--
Attachment: (was: YARN-6802.002.patch)

> Support view leaf queue am resource usage in RM web ui
> --
>
> Key: YARN-6802
> URL: https://issues.apache.org/jira/browse/YARN-6802
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.8.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> YARN-6802.001.patch, YARN-6802.002.patch
>
>
> The RM web UI should support viewing leaf queue AM resource usage. 
> !screenshot-2.png!
> I will upload my patch later.






[jira] [Updated] (YARN-6802) Support view leaf queue am resource usage in RM web ui

2017-07-25 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6802:
--
Attachment: YARN-6802.002.patch

> Support view leaf queue am resource usage in RM web ui
> --
>
> Key: YARN-6802
> URL: https://issues.apache.org/jira/browse/YARN-6802
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.8.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> YARN-6802.001.patch, YARN-6802.002.patch
>
>
> The RM web UI should support viewing leaf queue AM resource usage. 
> !screenshot-2.png!
> I will upload my patch later.






[jira] [Commented] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100995#comment-16100995
 ] 

Carlo Curino commented on YARN-6866:


Patch looks good to me. I will check that YARN-5412 doesn't have the same "." 
issue.

> Minor clean-up and fixes in anticipation of merge with trunk
> 
>
> Key: YARN-6866
> URL: https://issues.apache.org/jira/browse/YARN-6866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6866-YARN-2915.v1.patch, 
> YARN-6866-YARN-2915.v2.patch, YARN-6866-YARN-2915.v3.patch
>
>
> We have done e2e testing of YARN Federation successfully, and we have minor 
> clean-ups, like a pom version upgrade, a redundant "." in a configuration 
> string, documentation updates, etc., which we want to address before the 
> merge to trunk. This jira tracks the fixes we did as described above to 
> ensure a proper e2e run.






[jira] [Commented] (YARN-3409) Support Node Attribute functionality

2017-07-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100991#comment-16100991
 ] 

Naganarasimha G R commented on YARN-3409:
-

bq.  node1:java=8\[string\],ssd=false\[boolean\]
Thanks for clarifying, [~kkaranasos]. IIUC, you basically want {{"key=value"}} 
pairs. I somewhat agree with your approach, except that {{"[ ]"}} is generally 
used to mark optional parameters in a CLI command, so it would be a little 
complicated to capture in the help text. So can we think of something like 
{{node1:java=jdk8,ssd:boolean=false,ethernetcards:long=2}}, where, if no type 
is mentioned after the attribute name, we consider it a string?
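
A small sketch of how that suggested syntax could be parsed, with the type 
defaulting to string when omitted (illustrative only; it assumes the leading 
node prefix, e.g. {{node1:}}, has already been split off, and the delimiter 
handling would need hardening for real input):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Parses "java=jdk8,ssd:boolean=false,ethernetcards:long=2" into
// attribute -> [type, value]; the type defaults to "string" when omitted.
final class AttributeSpecParser {
  static Map<String, String[]> parse(String spec) {
    Map<String, String[]> out = new LinkedHashMap<>();
    for (String pair : spec.split(",")) {
      String[] kv = pair.split("=", 2);
      String[] nameType = kv[0].split(":", 2);
      String type = nameType.length == 2 ? nameType[1] : "string";
      out.put(nameType[0], new String[] { type, kv[1] });
    }
    return out;
  }

  public static void main(String[] args) {
    // Prints: java=[string, jdk8], ssd=[boolean, false], ethernetcards=[long, 2]
    parse("java=jdk8,ssd:boolean=false,ethernetcards:long=2")
        .forEach((k, v) -> System.out.println(k + "=" + java.util.Arrays.toString(v)));
  }
}
{code}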

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a special set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (only the market team has priority 
> to use the partition).
> - Percentages of capacity can apply to a partition (the market team has 40% 
> minimum capacity and the dev team has 60% minimum capacity of the partition).
> Attributes are orthogonal to partitions; they describe features of a node's 
> hardware/software just for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources that satisfy (glibc.version 
> >= 2.20 && JDK.version >= 8u20 && x86_64).






[jira] [Updated] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6866:
-
Attachment: YARN-6866-YARN-2915.v3.patch

Rebasing patch after branch force push

> Minor clean-up and fixes in anticipation of merge with trunk
> 
>
> Key: YARN-6866
> URL: https://issues.apache.org/jira/browse/YARN-6866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6866-YARN-2915.v1.patch, 
> YARN-6866-YARN-2915.v2.patch, YARN-6866-YARN-2915.v3.patch
>
>
> We have done e2e testing of YARN Federation successfully, and we have minor 
> clean-ups, like a pom version upgrade, a redundant "." in a configuration 
> string, documentation updates, etc., which we want to address before the 
> merge to trunk. This jira tracks the fixes we did as described above to 
> ensure a proper e2e run.






[jira] [Commented] (YARN-6868) Add test scope to certain entries in hadoop-yarn-server-resourcemanager pom.xml

2017-07-25 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100967#comment-16100967
 ] 

Ray Chiang commented on YARN-6868:
--

Unit test failure looks to be the same as YARN-5548.

> Add test scope to certain entries in hadoop-yarn-server-resourcemanager 
> pom.xml
> ---
>
> Key: YARN-6868
> URL: https://issues.apache.org/jira/browse/YARN-6868
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-6868.001.patch
>
>
> The tag
> {noformat}
> <scope>test</scope>
> {noformat}
> is missing from a few entries in the pom.xml for 
> hadoop-yarn-server-resourcemanager.
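
For illustration, a dependency entry with the scope in place would look like 
this (the artifact shown is just an example, not necessarily one touched by 
the patch):

{noformat}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <scope>test</scope>
</dependency>
{noformat}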






[jira] [Commented] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100957#comment-16100957
 ] 

Hadoop QA commented on YARN-6866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
7s{color} | {color:red} root in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
7s{color} | {color:red} root in YARN-2915 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} YARN-2915 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-yarn-api in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-yarn-server-router in YARN-2915 failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in YARN-2915 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
47s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 54s{color} 
| {color:red} root generated 1358 new + 0 unchanged - 0 fixed = 1358 total (was 
0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 48s{color} | {color:orange} root: The patch generated 208 new + 0 unchanged 
- 0 fixed = 208 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | 

[jira] [Updated] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6866:
---
Attachment: YARN-6866-YARN-2915.v2.patch

> Minor clean-up and fixes in anticipation of merge with trunk
> 
>
> Key: YARN-6866
> URL: https://issues.apache.org/jira/browse/YARN-6866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6866-YARN-2915.v1.patch, 
> YARN-6866-YARN-2915.v2.patch
>
>
> We have done e2e testing of YARN Federation successfully, and we have minor 
> clean-ups, like a pom version upgrade, a redundant "." in a configuration 
> string, documentation updates, etc., which we want to address before the 
> merge to trunk. This jira tracks the fixes we did as described above to 
> ensure a proper e2e run.






[jira] [Commented] (YARN-5548) Use MockRMMemoryStateStore to reduce test failures

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100928#comment-16100928
 ] 

Hadoop QA commented on YARN-5548:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 7 new + 963 unchanged 
- 8 fixed = 970 total (was 971) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m  
7s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5548 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878746/YARN-5548.0017.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 02f348c9ad46 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f81a4ef |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16544/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16544/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16544/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   

[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-07-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100916#comment-16100916
 ] 

Wangda Tan commented on YARN-6788:
--

[~sunilg], I'm not sure whether the findbugs warning is a real issue; could 
you check that?

Beyond that, the patch looks good.

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch, YARN-6788-YARN-3926.005.patch, 
> YARN-6788-YARN-3926.006.patch, YARN-6788-YARN-3926.007.patch, 
> YARN-6788-YARN-3926.008.patch, YARN-6788-YARN-3926.009.patch, 
> YARN-6788-YARN-3926.010.patch, YARN-6788-YARN-3926.011.patch, 
> YARN-6788-YARN-3926.012.patch, YARN-6788-YARN-3926.013.patch
>
>
> Currently we see roughly a 15% performance delta with this branch. 
> This jira proposes a few performance improvements to address it.
> Also this patch will handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100912#comment-16100912
 ] 

Hadoop QA commented on YARN-6788:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
2s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
14s{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3926 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 13 new + 125 unchanged - 17 fixed = 138 total (was 142) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 13s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 37s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The 

[jira] [Comment Edited] (YARN-6871) Add a new REST API GetAppReport call lighter and faster than the current one

2017-07-25 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100910#comment-16100910
 ] 

Subru Krishnan edited comment on YARN-6871 at 7/25/17 10:53 PM:


[~giovanni.fumarola], +1 on the proposal. Can we _extend_ the generic 
*deSelects* param introduced in YARN-6280 to achieve the same?

[~cltlfcjin]/[~sunilg], thoughts?


was (Author: subru):
[~giovanni.fumarola], +1 on the proposal. Can we use the generic *deSelects* 
param introduced in YARN-6280 to achieve the same?

[~cltlfcjin]/[~sunilg], thoughts?

> Add a new REST API GetAppReport call lighter and faster than the current one
> 
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>
> This jira tracks the effort to create a new REST API similar to the current 
> GetAppReport but lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} objects in the {{AppInfo}}.
> The new method returns, for each application, only the following essential 
> information:
> * The application id;
> * The application name;
> * The application state according to the ResourceManager - valid values are 
> members of the YarnApplicationState enum: NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING, FINISHED, FAILED, KILLED;
> * The final status of the application if finished - reported by the 
> application itself - valid values are: UNDEFINED, SUCCEEDED, FAILED, KILLED;
> * The web URL that can be used to track the application;
> * Detailed diagnostics information;
> * The URL of the application master container logs;
> * The HTTP address of the node hosting the application master;
> * The progress of the application as a percent.
> The YARN RM will return the new result faster, use fewer compute cycles to 
> create the report, and improve both RM and client performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100911#comment-16100911
 ] 

Hadoop QA commented on YARN-6866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
7s{color} | {color:red} root in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
6s{color} | {color:red} root in YARN-2915 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 6s{color} | {color:green} YARN-2915 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in YARN-2915 failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
7s{color} | {color:red} hadoop-yarn-api in YARN-2915 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in YARN-2915 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
6s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  6s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-yarn-server-router in the patch 

[jira] [Commented] (YARN-6871) Add a new REST API GetAppReport call lighter and faster than the current one

2017-07-25 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100910#comment-16100910
 ] 

Subru Krishnan commented on YARN-6871:
--

[~giovanni.fumarola], +1 on the proposal. Can we use the generic *deSelects* 
param introduced in YARN-6280 to achieve the same?
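
For illustration, such a request could look like this (endpoint and param 
value shown only as an example of the YARN-6280 mechanism, not a committed 
design):

{code}
GET http://<rm-http-address>:8088/ws/v1/cluster/apps?deSelects=resourceRequests
{code}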

[~cltlfcjin]/[~sunilg], thoughts?

> Add a new REST API GetAppReport call lighter and faster than the current one
> 
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>
> This jira tracks the effort to create a new REST API similar to the current 
> GetAppReport but lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} objects in the {{AppInfo}}.
> The new method returns, for each application, only the following essential 
> information:
> * The application id;
> * The application name;
> * The application state according to the ResourceManager - valid values are 
> members of the YarnApplicationState enum: NEW, NEW_SAVING, SUBMITTED, 
> ACCEPTED, RUNNING, FINISHED, FAILED, KILLED;
> * The final status of the application if finished - reported by the 
> application itself - valid values are: UNDEFINED, SUCCEEDED, FAILED, KILLED;
> * The web URL that can be used to track the application;
> * Detailed diagnostics information;
> * The URL of the application master container logs;
> * The HTTP address of the node hosting the application master;
> * The progress of the application as a percent.
> The YARN RM will return the new result faster, use fewer compute cycles to 
> create the report, and improve both RM and client performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6871) Add a new REST API GetAppReport call lighter and faster than the current one

2017-07-25 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-6871:
--

 Summary: Add a new REST API GetAppReport call lighter and faster 
than the current one
 Key: YARN-6871
 URL: https://issues.apache.org/jira/browse/YARN-6871
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager, router
Reporter: Giovanni Matteo Fumarola


This jira tracks the effort to create a new REST API similar to the current 
GetAppReport but lighter and faster.
With the current one we are facing scalability issues.
E.g. with ~500 applications running, the AppReport can reach up to 300MB in size 
due to the {{ResourceRequest}} objects in the {{AppInfo}}.
The new method returns, for each application, only the following essential 
information:
* The application id;
* The application name;
* The application state according to the ResourceManager - valid values are 
members of the YarnApplicationState enum: NEW, NEW_SAVING, SUBMITTED, ACCEPTED, 
RUNNING, FINISHED, FAILED, KILLED;
* The final status of the application if finished - reported by the application 
itself - valid values are: UNDEFINED, SUCCEEDED, FAILED, KILLED;
* The web URL that can be used to track the application;
* Detailed diagnostics information;
* The URL of the application master container logs;
* The HTTP address of the node hosting the application master;
* The progress of the application as a percent.

The YARN RM will return the new result faster, use fewer compute cycles to 
create the report, and improve both RM and client performance.
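
For illustration only, the trimmed-down report could look like the sketch 
below; the field names mirror the list above but are assumptions, not a 
committed schema:

{code}
{
  "app" : {
    "id" : "application_1500000000000_0001",
    "name" : "wordcount",
    "state" : "RUNNING",
    "finalStatus" : "UNDEFINED",
    "trackingUrl" : "http://rm-host:8088/proxy/application_1500000000000_0001/",
    "diagnostics" : "",
    "amContainerLogs" : "http://nm-host:8042/node/containerlogs/...",
    "amNodeHttpAddress" : "nm-host:8042",
    "progress" : 42.0
  }
}
{code}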



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6866) Minor clean-up and fixes in anticipation of merge with trunk

2017-07-25 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6866:
---
Attachment: YARN-6866-YARN-2915.v1.patch

> Minor clean-up and fixes in anticipation of merge with trunk
> 
>
> Key: YARN-6866
> URL: https://issues.apache.org/jira/browse/YARN-6866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6866-YARN-2915.v1.patch
>
>
> We have done e2e testing of YARN Federation successfully, and we have minor 
> clean-ups like a pom version upgrade, a redundant "." in a configuration 
> string, documentation updates, etc. that we want to finish before the merge 
> to trunk. This jira tracks the fixes we made, as described above, to ensure 
> a proper e2e run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-07-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6852:
-
Attachment: YARN-6852.002.patch

Removed docker-related logic from the patch (ver. 002). This patch depends on 
YARN-6033, so I will submit the patch once YARN-6033 gets committed.

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100865#comment-16100865
 ] 

Jian He commented on YARN-6130:
---

Yeah, I think we can compare the token.
bq. this is used to handle race condition between 2 NM sending collector 
address to RM.
Is this already implemented? I don't see the logic in the code.

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-07-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100861#comment-16100861
 ] 

Wangda Tan commented on YARN-6033:
--

While compiling with the patch, I found that it breaks the build for gcc with 
the following error message:
{code}
...hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c:84:75:
 error: expected ‘)’ before ‘PRId64’
[WARNING]  fprintf(ERRORFILE, "File %s must be owned by root, but is owned 
by %" PRId64 "\n",
{code}

Defining {{__STDC_FORMAT_MACROS}} before including {{inttypes.h}} in 
configuration.c solves the problem.
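
A minimal sketch of that fix (a hypothetical stand-alone file, not the real 
configuration.c):

{code}
/* On some older toolchains, <inttypes.h> only exposes the PRId64 format
 * macros if __STDC_FORMAT_MACROS is defined before the include. */
#define __STDC_FORMAT_MACROS
#include <inttypes.h>
#include <stdio.h>

int main(void) {
  int64_t owner_uid = 1000; /* illustrative value */
  fprintf(stderr, "File %s must be owned by root, but is owned by %" PRId64 "\n",
          "container-executor.cfg", owner_uid);
  return 0;
}
{code}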

GCC version:

{code} 
/bin/cc -v
Using built-in specs.
COLLECT_GCC=/bin/cc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man 
--infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla 
--enable-bootstrap --enable-shared --enable-threads=posix 
--enable-checking=release --with-system-zlib --enable-__cxa_atexit 
--disable-libunwind-exceptions --enable-gnu-unique-object 
--enable-linker-build-id --with-linker-hash-style=gnu 
--enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin 
--enable-initfini-array --disable-libgcj 
--with-isl=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/isl-install
 
--with-cloog=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/cloog-install
 --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 
--build=x86_64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
{code} 

However, it works fine under clang:
{code}
cc -v
Apple LLVM version 8.1.0 (clang-802.0.42)
Target: x86_64-apple-darwin16.5.0
Thread model: posix
InstalledDir: 
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
{code} 

I'm not sure why this happens; it might be better to understand the problem 
before proceeding.

cc: [~vvasudev]/[~sunilg].

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033-YARN-5673.001.patch, YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6505) Define the strings used in SLS JSON input file format

2017-07-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100795#comment-16100795
 ] 

Yufei Gu commented on YARN-6505:


Thanks [~GergelyNovak] for working on this. Looks good to me generally. Some 
thoughts:
# Patch v1 doesn't apply to trunk anymore. Can you rebase it?
# YARN-6685 adds a job count to the SLS JSON input format; can you add that as 
well?

> Define the strings used in SLS JSON input file format
> -
>
> Key: YARN-6505
> URL: https://issues.apache.org/jira/browse/YARN-6505
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Gergely Novák
>  Labels: newbie
> Attachments: YARN-6505.001.patch
>
>
> We could put them in a Java file, like YarnConfiguration does.
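
A minimal sketch of such a holder (class and key names below are illustrative 
assumptions, not the actual patch):

{code}
/** Hypothetical YarnConfiguration-style holder for the SLS JSON input keys. */
public final class SLSJsonKeys {
  public static final String JOB_START_MS = "job.start.ms";
  public static final String JOB_END_MS = "job.end.ms";
  public static final String JOB_QUEUE_NAME = "job.queue.name";
  public static final String JOB_TASKS = "job.tasks";

  private SLSJsonKeys() { } // constants only, no instances
}
{code}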



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise

2017-07-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6870:
--
Target Version/s: 2.9.0, 3.0.0-beta1

> ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a 
> float, which is imprecise
> ---
>
> Key: YARN-6870
> URL: https://issues.apache.org/jira/browse/YARN-6870
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, nodemanager
>Reporter: Brook Zhou
>Assignee: Brook Zhou
>
> We have seen issues on our clusters where the current way of computing CPU 
> usage suffers from float-arithmetic inaccuracies (the bug is still present 
> in trunk).
> Simple program to illustrate:
> {code:title=Bar.java|borderStyle=solid}
>   public static void main(String[] args) throws Exception {
> float result = 0.0f;
> for (int i = 0; i < 7; i++) {
>   if (i == 6) {
> result += (float) 4 / (float)18;
>   } else {
> result += (float) 2 / (float)18;
>   }
> }
> for (int i = 0; i < 7; i++) {
>   if (i == 6) {
> result -= (float) 4 / (float)18;
>   } else {
> result -= (float) 2 / (float)18;
>   } 
> }
> System.out.println(result);
>   }
> {code}
> // Printed
> 4.4703484E-8
> 2017-04-12 05:43:24,014 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current 
> CPU Allocation: [0.891], Requested CPU Allocation: [0.]
> There are a few places with this issue:
> 1. ResourceUtilization.java - set/getCPU both use float. When 
> ContainerScheduler calls 
> ContainersMonitor.increase/decreaseResourceUtilization, this may lead to 
> issues.
> 2. AllocationBasedResourceUtilizationTracker.java  - hasResourcesAvailable 
> uses float as well for CPU computation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise

2017-07-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100791#comment-16100791
 ] 

Arun Suresh commented on YARN-6870:
---

Thanks for raising this [~brookz]. Agreed, this can be a problem. Do you have 
a patch for this (preferably one for trunk and one for branch-2 as well)?

> ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a 
> float, which is imprecise
> ---
>
> Key: YARN-6870
> URL: https://issues.apache.org/jira/browse/YARN-6870
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, nodemanager
>Reporter: Brook Zhou
>Assignee: Brook Zhou
>
> We have seen issues on our clusters where the current way of computing CPU 
> usage suffers from float-arithmetic inaccuracies (the bug is still present 
> in trunk).
> Simple program to illustrate:
> {code:title=Bar.java|borderStyle=solid}
>   public static void main(String[] args) throws Exception {
> float result = 0.0f;
> for (int i = 0; i < 7; i++) {
>   if (i == 6) {
> result += (float) 4 / (float)18;
>   } else {
> result += (float) 2 / (float)18;
>   }
> }
> for (int i = 0; i < 7; i++) {
>   if (i == 6) {
> result -= (float) 4 / (float)18;
>   } else {
> result -= (float) 2 / (float)18;
>   } 
> }
> System.out.println(result);
>   }
> {code}
> // Printed
> 4.4703484E-8
> 2017-04-12 05:43:24,014 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current 
> CPU Allocation: [0.891], Requested CPU Allocation: [0.]
> There are a few places with this issue:
> 1. ResourceUtilization.java - set/getCPU both use float. When 
> ContainerScheduler calls 
> ContainersMonitor.increase/decreaseResourceUtilization, this may lead to 
> issues.
> 2. AllocationBasedResourceUtilizationTracker.java  - hasResourcesAvailable 
> uses float as well for CPU computation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100778#comment-16100778
 ] 

Hadoop QA commented on YARN-6593:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 61 new + 24 unchanged - 0 fixed = 85 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 2 new + 123 unchanged - 0 fixed = 125 total (was 123) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6593 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878896/YARN-6593.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux d610b38f0c6b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f81a4ef |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16543/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| whitespace | 

[jira] [Created] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise

2017-07-25 Thread Brook Zhou (JIRA)
Brook Zhou created YARN-6870:


 Summary: ResourceUtilization/ContainersMonitorImpl is calculating 
CPU utilization as a float, which is imprecise
 Key: YARN-6870
 URL: https://issues.apache.org/jira/browse/YARN-6870
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, nodemanager
Reporter: Brook Zhou
Assignee: Brook Zhou


We have seen issues on our clusters where the current way of computing CPU 
usage suffers from float-arithmetic inaccuracies (the bug is still present in trunk).

Simple program to illustrate:
{code:title=Bar.java|borderStyle=solid}
  public static void main(String[] args) throws Exception {
float result = 0.0f;
for (int i = 0; i < 7; i++) {
  if (i == 6) {
result += (float) 4 / (float)18;
  } else {
result += (float) 2 / (float)18;
  }
}
for (int i = 0; i < 7; i++) {
  if (i == 6) {
result -= (float) 4 / (float)18;
  } else {
result -= (float) 2 / (float)18;
  } 
}
System.out.println(result);
  }
{code}
// Printed
4.4703484E-8


2017-04-12 05:43:24,014 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current CPU 
Allocation: [0.891], Requested CPU Allocation: [0.]

There are a few places with this issue:
1. ResourceUtilization.java - set/getCPU both use float. When 
ContainerScheduler calls 
ContainersMonitor.increase/decreaseResourceUtilization, this may lead to issues.

2. AllocationBasedResourceUtilizationTracker.java  - hasResourcesAvailable uses 
float as well for CPU computation.
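
One possible direction (a sketch under the assumption that integral units are 
acceptable, not necessarily the eventual patch) is to track CPU in 
milli-vcores so that increase/decrease pairs cancel exactly:

{code}
// Sketch: accumulate CPU as an int count of milli-vcores instead of a float
// fraction; class and method names are illustrative.
public class CpuUtilizationSketch {
  private int milliVcores; // 1000 == one full vcore

  public void increase(int requestedMilliVcores) { milliVcores += requestedMilliVcores; }
  public void decrease(int releasedMilliVcores) { milliVcores -= releasedMilliVcores; }

  // Convert to a fraction only at the edge, where exactness no longer matters.
  public float asFraction(int totalVcores) {
    return milliVcores / (1000f * totalVcores);
  }
}
{code}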



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6031) Application recovery has failed when node label feature is turned off during RM recovery

2017-07-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100766#comment-16100766
 ] 

Jian He commented on YARN-6031:
---

bq. In RMAppManager#createAndPopulateNewRMApp, app is just created whether its 
in submission/recovery mode. Attempt is not yet created. Hence I think this 
wont be a problem.
The scenario is this: the RMApp has transitioned to FAILED and that state is 
persisted in the store, but the attempt state is still null. If the admin 
later re-enables node labels, the RMApp will be recovered as FAILED, but the 
attempt state will be NULL.

bq.  Hence recovery for other apps will also continue and we will have context 
of this app as well.
Killing an app for an admin's mistake may be harsh from the perspective of a 
service app, as all of its service containers will be killed. I was thinking 
we could let the app continue to run: existing containers will keep running 
fine, and new requests with a label will be rejected. I guess we can surface 
this as a diagnostic to the user?



> Application recovery has failed when node label feature is turned off during 
> RM recovery
> 
>
> Key: YARN-6031
> URL: https://issues.apache.org/jira/browse/YARN-6031
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.8.0
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6031.001.patch, YARN-6031.002.patch, 
> YARN-6031.003.patch, YARN-6031.004.patch, YARN-6031.005.patch, 
> YARN-6031.006.patch, YARN-6031.007.patch, YARN-6031-branch-2.8.001.patch
>
>
> Here are the repro steps:
> Enable node labels, restart the RM, configure CS properly, and run some jobs;
> Disable node labels, restart the RM, and the following exception is thrown:
> {noformat}
> Caused by: 
> org.apache.hadoop.yarn.exceptions.InvalidLabelResourceRequestException: 
> Invalid resource request, node label not enabled but request contains label 
> expression
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:225)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:248)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:394)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:339)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:436)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1165)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:574)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> ... 10 more
> {noformat}
> During RM restart, application recovery failed because the application had a 
> node label expression specified while node labels were disabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6855) CLI Proto Modifications to support Node Attributes

2017-07-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100760#comment-16100760
 ] 

Wangda Tan commented on YARN-6855:
--

Thanks [~naganarasimha...@apache.org] for working on the patch. I quickly 
scanned it; a few comments:
1) Suggest making all {{@Public}} APIs {{@Public/Unstable}}, since this is a 
new feature that is expected to change; we can upgrade the compatibility tag 
after a few releases.
2) NodeIdToAttributes: suggest renaming it to HostToAttribute, as we discussed 
in YARN-3409.
3) Only one method is added in resourcemanager_administration_protocol.proto; 
is that enough?

> CLI Proto Modifications to support Node Attributes
> --
>
> Key: YARN-6855
> URL: https://issues.apache.org/jira/browse/YARN-6855
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6855-YARN-3409.001.patch, 
> YARN-6855-YARN-3409.002.patch, YARN-6855-YARN-3409.003.patch
>
>
> This jira focuses only on the proto modifications required for the CLI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6413) Decouple Yarn Registry API from ZK

2017-07-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100749#comment-16100749
 ] 

Jian He commented on YARN-6413:
---

bq. Right now yarn-native-services doesn't compile for me with a pom.xml error 
(pulled just now), is the branch healthy?
I just fixed this; can you pull the latest branch?
bq. I can add a select-multiple method with a filter of some sort.
What do you mean by this? As long as it doesn't change the functionality, I 
think it's fine.


> Decouple Yarn Registry API from ZK
> --
>
> Key: YARN-6413
> URL: https://issues.apache.org/jira/browse/YARN-6413
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: amrmproxy, api, resourcemanager
>Reporter: Ellen Hui
>Assignee: Ellen Hui
> Attachments: 0001-Registry-API-v2.patch, 0002-Registry-API-v2.patch
>
>
> Right now the Yarn Registry API (defined in the RegistryOperations interface) 
> is a very thin layer over Zookeeper. This jira proposes changing the 
> interface to abstract away the implementation details so that we can write a 
> FS-based implementation of the registry service, which will be used to 
> support AMRMProxy HA.
> The new interface will use register/delete/resolve APIs instead of 
> Zookeeper-specific operations like mknode. 
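
For a rough illustration, the decoupled interface could take a shape like this 
(only register/delete/resolve come from the description above; the signatures 
and the byte[] payload are assumptions):

{code}
import java.io.IOException;

// Hedged sketch of a backend-neutral registry API; a real version would carry
// the registry's record type rather than raw bytes.
public interface DecoupledRegistryOperations {
  void register(String path, byte[] record) throws IOException;
  void delete(String path) throws IOException;
  byte[] resolve(String path) throws IOException;
}
{code}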



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6726) Fix issues with docker commands executed by container-executor

2017-07-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100748#comment-16100748
 ] 

Wangda Tan commented on YARN-6726:
--

Thanks for the explanations [~shaneku...@gmail.com]; I'm still not fully 
understanding #2.

I can understand why the command goes to STDOUT, but I'm not sure why removing 
fflush avoids this. Before the patch, the output of container-executor (in 
time order) was:
a. Using command:   (fflush)
b. Output of {{docker inspect}} 

Even if we remove fflush from (a), I'm not sure how we can keep it out of the 
output, since it happens before (b). 

Also, I'm wondering if this is the correct solution, because it assumes that 
inside c-e no message is allowed to be printed before the actual docker 
command execution. Ideally we should either redirect the docker command output 
to a separate file or possibly use a named pipe.
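
One possible piece of the puzzle (a generic stdio demo under the assumption 
that c-e exec()s docker; this is not taken from the actual c-e code): stdio 
buffers are process-local, so output that is printed but not flushed before 
exec() is simply discarded when the new program image takes over.

{code}
#include <stdio.h>
#include <unistd.h>

int main(void) {
  /* With stdout going to a pipe (fully buffered), this line stays in the
   * stdio buffer... */
  printf("Using command: /usr/bin/docker inspect ...\n");
  /* ...and without fflush(stdout) here, exec discards that buffer, so the
   * line never reaches the stream the caller parses. */
  execlp("echo", "echo", "docker-inspect-output", (char *)NULL);
  perror("execlp");
  return 1;
}
{code}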

> Fix issues with docker commands executed by container-executor
> --
>
> Key: YARN-6726
> URL: https://issues.apache.org/jira/browse/YARN-6726
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6726.001.patch
>
>
> docker inspect, rm, stop, etc. are issued through container-executor. Commands 
> other than docker run are not functioning properly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6413) Decouple Yarn Registry API from ZK

2017-07-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100749#comment-16100749
 ] 

Jian He edited comment on YARN-6413 at 7/25/17 8:44 PM:


bq. Right now yarn-native-services doesn't compile for me with a pom.xml error 
(pulled just now), is the branch healthy?
I just fixed this; can you pull the latest branch?
bq. I can add a select-multiple method with a filter of some sort.
What do you mean by this? As long as it doesn't change the functionality, I 
think it's fine.



was (Author: jianhe):
bq. Right now yarn-native-services doesn't compile for me with a pom.xml error 
(pulled just now), is the branch healthy?
I just fixed this, can you pull the latest branch?
bq. I can add a select-multiple method with a filter of some sort. what do you 
mean by this ? as long as it doesn't change the functionality, I think it's 
fine. 


> Decouple Yarn Registry API from ZK
> --
>
> Key: YARN-6413
> URL: https://issues.apache.org/jira/browse/YARN-6413
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: amrmproxy, api, resourcemanager
>Reporter: Ellen Hui
>Assignee: Ellen Hui
> Attachments: 0001-Registry-API-v2.patch, 0002-Registry-API-v2.patch
>
>
> Right now the Yarn Registry API (defined in the RegistryOperations interface) 
> is a very thin layer over Zookeeper. This jira proposes changing the 
> interface to abstract away the implementation details so that we can write a 
> FS-based implementation of the registry service, which will be used to 
> support AMRMProxy HA.
> The new interface will use register/delete/resolve APIs instead of 
> Zookeeper-specific operations like mknode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6307) Refactor FairShareComparator#compare

2017-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100732#comment-16100732
 ] 

Hudson commented on YARN-6307:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12054 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12054/])
YARN-6307. Refactor FairShareComparator#compare (Contributed by Yufei Gu 
(templedf: rev f81a4efb8c40f99a9a6b7b42d3b6eeedf43eb27a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java


> Refactor FairShareComparator#compare
> 
>
> Key: YARN-6307
> URL: https://issues.apache.org/jira/browse/YARN-6307
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6307.001.patch, YARN-6307.002.patch, 
> YARN-6307.003.patch
>
>
> The method does three things: compares min share usage, compares fair share 
> usage by checking the weight ratio, and breaks ties by submit time and name. 
> These are mixed together, which makes the method hard to read and maintain. 
> Additionally, there are potential performance issues; e.g., there is no need 
> to check the weight ratio if the minShare usage comparison already decides 
> the order. It is worth improving given the huge number of invocations in the 
> scheduler.
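
For illustration, the refactor could split the three concerns into helpers 
that short-circuit (a structural sketch with the helper bodies elided; names 
are not necessarily the committed ones):

{code}
// Sketch of the decomposition described above.
public int compare(Schedulable s1, Schedulable s2) {
  int res = compareMinShareUsage(s1, s2);      // 1) min share usage first
  if (res == 0) {
    res = compareFairShareUsage(s1, s2);       // 2) weight ratio only if tied
  }
  if (res == 0) {
    res = compareStartTimeThenName(s1, s2);    // 3) deterministic tie-break
  }
  return res;
}
{code}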



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100725#comment-16100725
 ] 

Jian He commented on YARN-6804:
---

The trunk patch is committed. Thanks, Billie!

> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services, 3.0.0-beta1
>
> Attachments: YARN-6804-trunk.004.patch, YARN-6804-trunk.005.patch, 
> YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch, 
> YARN-6804-yarn-native-services.003.patch, 
> YARN-6804-yarn-native-services.004.patch, 
> YARN-6804-yarn-native-services.005.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3409) Support Node Attribute functionality

2017-07-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100718#comment-16100718
 ] 

Wangda Tan commented on YARN-3409:
--

[~kkaranasos]/[~naganarasimha...@apache.org]. 

bq. For point 3 re: the data types, we can do something like this: 
node1:java=8[string],ssd=false[boolean]. But either way, it is not a big deal.
This proposal looks better to me for supporting typed attributes.

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a particular set of 
> nodes can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (only the market team has priority 
> to use the partition).
> - Percentages of capacity can apply to a partition (the market team has a 
> 40% minimum capacity and the dev team a 60% minimum capacity of the 
> partition).
> Attributes are orthogonal to partitions; they describe features of a node's 
> hardware/software purely for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources that satisfy (glibc.version 
> >= 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100684#comment-16100684
 ] 

Hadoop QA commented on YARN-6734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 3 new + 
12 unchanged - 0 fixed = 15 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
26s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6734 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-6593) [API] Introduce Placement Constraint object

2017-07-25 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-6593:
-
Attachment: YARN-6593.005.patch

Uploading a new version of the patch.
I addressed the comments of [~leftnoteasy] and [~chris.douglas].
I added unit tests for creating constraints, constraint transformations, and 
conversions from/to protobufs.
I still need to add more javadocs -- will do so after I get Jenkins' output.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch
>
>
> Just removed the Fix version and moved it to the Target version, as we set 
> the fix version only after the patch is committed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100661#comment-16100661
 ] 

Hadoop QA commented on YARN-4161:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 327 unchanged - 2 fixed = 327 total (was 329) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-4161 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878833/YARN-4161.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bf6c3876a324 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac9489f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16538/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16538/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16538/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 

[jira] [Commented] (YARN-6868) Add test scope to certain entries in hadoop-yarn-server-resourcemanager pom.xml

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100660#comment-16100660
 ] 

Hadoop QA commented on YARN-6868:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878834/YARN-6868.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 91f8acd3bf45 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac9489f |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16540/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16540/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16540/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add test scope to certain entries in hadoop-yarn-server-resourcemanager 
> pom.xml
> ---
>
> Key: YARN-6868
> URL: https://issues.apache.org/jira/browse/YARN-6868
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>

[jira] [Commented] (YARN-6802) Support view leaf queue am resource usage in RM web ui

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100652#comment-16100652
 ] 

Hadoop QA commented on YARN-6802:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 859 unchanged - 0 fixed = 861 total (was 859) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m 
46s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878815/YARN-6802.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c8475f023cc 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac9489f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/16539/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16539/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16539/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16539/console |
| Powered by | Apache Yetus 

[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100594#comment-16100594
 ] 

Hudson commented on YARN-6804:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12053 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12053/])
YARN-6804. Allow custom hostname for docker containers in native (jianhe: rev 
ac9489f7fc2dd351fbe5be4b7a3add4782da81c3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryPathUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/ServiceRecord.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c


> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6804-trunk.004.patch, YARN-6804-trunk.005.patch, 
> YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch, 
> YARN-6804-yarn-native-services.003.patch, 
> YARN-6804-yarn-native-services.004.patch, 
> YARN-6804-yarn-native-services.005.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.
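To make the naming scheme concrete, here is a hypothetical sketch of deriving 
a DNS-friendly hostname from a container ID. The exact format produced by 
registry DNS may differ; the method and format below are assumptions for 
illustration only:

{code}
// Hypothetical sketch only: derives a DNS-safe hostname from a container ID
// by replacing characters that are not legal in DNS labels.
public class ContainerHostnameSketch {
  static String hostnameFor(String containerId, String service, String user,
      String domain) {
    String dnsLabel =
        containerId.replace("container", "ctr").replace('_', '-');
    return dnsLabel + "." + service + "." + user + "." + domain;
  }

  public static void main(String[] args) {
    System.out.println(hostnameFor("container_1500987852094_0001_01_000001",
        "sleeper", "hadoopuser", "example.site"));
    // ctr-1500987852094-0001-01-000001.sleeper.hadoopuser.example.site
  }
}
{code}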



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-07-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100587#comment-16100587
 ] 

Daniel Templeton commented on YARN-6610:


Yeah, the TreeSet was just there until YARN-6788 gets in.  Now that we're 
close, I'll put this one on hold.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6468) Add Max AM Resource and AM Resource Usage to FairScheduler WebUI

2017-07-25 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu resolved YARN-6468.

Resolution: Duplicate

> Add Max AM Resource and AM Resource Usage to FairScheduler WebUI
> 
>
> Key: YARN-6468
> URL: https://issues.apache.org/jira/browse/YARN-6468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Max AM Resource indicates maximum resources AM can use in a queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6869) Resources.lessThan() et al should not use ResourceCalculator.compare()

2017-07-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100548#comment-16100548
 ] 

Daniel Templeton commented on YARN-6869:


The other thing I'm seeing here is that many of the uses of these methods are 
broken when DRF is used, because they're using them to check limits.  If I set 
a limit of 8GB of memory, I don't care how many CPUs are requested; the 8GB 
memory limit should be enforced.  There are very few uses where dominance 
needs to be considered, and in those cases they can just call {{compare()}} 
directly.  There are also a few uses of {{lessThan*()}} to look for zero or 
negative resource values, which could be replaced with a method that 
explicitly does that check/correction.
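To make the limit-check concern concrete, here is a minimal sketch of the two 
comparison styles, using a simplified stand-in {{Resource}} class rather than 
YARN's actual API:

{code}
// Sketch: a dominant-share comparison can let a request through a limit that
// a per-component check would reject. The Resource class here is a stand-in.
public class LimitCheckSketch {

  static class Resource {
    final long memoryMB;
    final long vcores;
    Resource(long memoryMB, long vcores) {
      this.memoryMB = memoryMB;
      this.vcores = vcores;
    }
  }

  // DRF-style: compares dominant shares relative to the cluster total, so a
  // request can compare "less than" a memory limit merely because the
  // limit's vcore share is huge.
  static boolean dominantLessThanOrEqual(Resource cluster, Resource lhs,
      Resource rhs) {
    double lhsShare = Math.max((double) lhs.memoryMB / cluster.memoryMB,
        (double) lhs.vcores / cluster.vcores);
    double rhsShare = Math.max((double) rhs.memoryMB / cluster.memoryMB,
        (double) rhs.vcores / cluster.vcores);
    return lhsShare <= rhsShare;
  }

  // Per-component: an 8GB memory limit is enforced no matter how many
  // vcores are requested.
  static boolean fitsIn(Resource request, Resource limit) {
    return request.memoryMB <= limit.memoryMB
        && request.vcores <= limit.vcores;
  }

  public static void main(String[] args) {
    Resource cluster = new Resource(1024 * 1024, 1024);
    Resource limit = new Resource(8 * 1024, 1024);  // 8GB memory limit
    Resource request = new Resource(16 * 1024, 1);  // 16GB, 1 vcore
    System.out.println(dominantLessThanOrEqual(cluster, request, limit)); // true
    System.out.println(fitsIn(request, limit));                           // false
  }
}
{code}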

> Resources.lessThan() et al should not use ResourceCalculator.compare()
> --
>
> Key: YARN-6869
> URL: https://issues.apache.org/jira/browse/YARN-6869
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> If you look at what the callers of {{lessThan()}}, {{lessThanOrEqual()}}, 
> {{greaterThan()}}, and {{greaterThanOrEqual()}} expect the calls to do, 
> there's no reason why the comparison should be done in the context of the total 
> resources.  Skipping the division and doing a direct comparison should 
> improve performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6869) Resources.lessThan() et al should not use ResourceCalculator.compare()

2017-07-25 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6869:
--

Assignee: Daniel Templeton

> Resources.lessThan() et al should not use ResourceCalculator.compare()
> --
>
> Key: YARN-6869
> URL: https://issues.apache.org/jira/browse/YARN-6869
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> If you look at what the callers of {{lessThan()}}, {{lessThanOrEqual()}}, 
> {{greaterThan()}}, and {{greaterThanOrEqual()}} expect the calls to do, 
> there's no reason why the comparison should be done in the context of the total 
> resources.  Skipping the division and doing a direct comparison should 
> improve performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6869) Resources.lessThan() et al should not use ResourceCalculator.compare()

2017-07-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100526#comment-16100526
 ] 

Sunil G commented on YARN-6869:
---

Thanks [~templedf].
We ran into this issue while doing preemption. I had been thinking this API 
needed a compare-style method to determine whether there is any dominance for 
a given resource type, and indeed that could impact these APIs. If these are 
vanilla checks like lessThan, greaterThan, etc., we could add APIs like 
absoluteLessThan and absoluteGreaterThan. And if dominance must be handled in 
these APIs, we can leave them as they are. I don't have much context on how 
important the dominance check is for these APIs, so please feel free to 
correct me. 

> Resources.lessThan() et al should not use ResourceCalculator.compare()
> --
>
> Key: YARN-6869
> URL: https://issues.apache.org/jira/browse/YARN-6869
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>
> If you look at what the callers of {{lessThan()}}, {{lessThanOrEqual()}}, 
> {{greaterThan()}}, and {{greaterThanOrEqual()}} expect the calls to do, 
> there's no reason why the comparison should be done in the context of the total 
> resources.  Skipping the division and doing a direct comparison should 
> improve performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6869) Resources.lessThan() et al should not use ResourceCalculator.compare()

2017-07-25 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6869:
--

 Summary: Resources.lessThan() et al should not use 
ResourceCalculator.compare()
 Key: YARN-6869
 URL: https://issues.apache.org/jira/browse/YARN-6869
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 3.0.0-alpha4
Reporter: Daniel Templeton


If you look at what the callers of {{lessThan()}}, {{lessThanOrEqual()}}, 
{{greaterThan()}}, and {{greaterThanOrEqual()}} expect the calls to do, there's 
no reason why the comparison should be done in the context of the total resources.  
Skipping the division and doing a direct comparison should improve performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3409) Support Node Attribute functionality

2017-07-25 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100427#comment-16100427
 ] 

Konstantinos Karanasos commented on YARN-3409:
--

Thanks for the answers, [~sunilg] and [~Naganarasimha].
I think we are in sync.

[~Naganarasimha], regarding the CLI for adding/updating attributes, I see point 
3 as a valid concern. Points 1 and 2 are less of an issue. We can still do 
multiple attributes at once, as in {{node1:java=8,ssd=false}}. For point 3 re: 
the data types, we can do something like this: 
{{node1:java=8\[string\],ssd=false\[boolean\]}}. But either way, it is not a 
big deal.

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a specific set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priorities can be applied per partition (e.g., only the market team 
> has priority to use the partition).
> - Capacity percentages can be applied per partition (e.g., the market team 
> has a 40% minimum capacity and the dev team has a 60% minimum capacity of 
> the partition).
> Attributes are orthogonal to partitions; they describe features of a node's 
> hardware/software purely for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources that satisfy (glibc.version 
> >= 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6868) Add test scope to certain entries in hadoop-yarn-server-resourcemanager pom.xml

2017-07-25 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100396#comment-16100396
 ] 

Ray Chiang commented on YARN-6868:
--

[~haibo.chen] or [~sjlee0], can you let me know if the above is also no longer 
necessary?  I'm guessing it might be leftover from some test restructuring.

> Add test scope to certain entries in hadoop-yarn-server-resourcemanager 
> pom.xml
> ---
>
> Key: YARN-6868
> URL: https://issues.apache.org/jira/browse/YARN-6868
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-6868.001.patch
>
>
> The tag
> {noformat}
> <scope>test</scope>
> {noformat}
> is missing from a few entries in the pom.xml for 
> hadoop-yarn-server-resourcemanager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6868) Add test scope to certain entries in hadoop-yarn-server-resourcemanager pom.xml

2017-07-25 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100391#comment-16100391
 ] 

Ray Chiang commented on YARN-6868:
--

Oddly, I also see this entry as unnecessary, or at least I can't figure out 
how to make the build fail when I remove it.

{noformat}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-server-timelineservice</artifactId>
  <scope>test</scope>
  <type>test-jar</type>
</dependency>
{noformat}

> Add test scope to certain entries in hadoop-yarn-server-resourcemanager 
> pom.xml
> ---
>
> Key: YARN-6868
> URL: https://issues.apache.org/jira/browse/YARN-6868
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-6868.001.patch
>
>
> The tag
> {noformat}
> <scope>test</scope>
> {noformat}
> is missing from a few entries in the pom.xml for 
> hadoop-yarn-server-resourcemanager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6868) Add test scope to certain entries in hadoop-yarn-server-resourcemanager pom.xml

2017-07-25 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-6868:
-
Attachment: YARN-6868.001.patch

> Add test scope to certain entries in hadoop-yarn-server-resourcemanager 
> pom.xml
> ---
>
> Key: YARN-6868
> URL: https://issues.apache.org/jira/browse/YARN-6868
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-6868.001.patch
>
>
> The tag
> {noformat}
> <scope>test</scope>
> {noformat}
> is missing from a few entries in the pom.xml for 
> hadoop-yarn-server-resourcemanager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6868) Add test scope to certain entries in hadoop-yarn-server-resourcemanager pom.xml

2017-07-25 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-6868:


 Summary: Add test scope to certain entries in 
hadoop-yarn-server-resourcemanager pom.xml
 Key: YARN-6868
 URL: https://issues.apache.org/jira/browse/YARN-6868
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0-beta1
Reporter: Ray Chiang
Assignee: Ray Chiang


The tag

{noformat}
<scope>test</scope>
{noformat}

is missing from a few entries in the pom.xml for 
hadoop-yarn-server-resourcemanager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-07-25 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-4161:
--
Attachment: YARN-4161.006.patch

Good catch, [~sunilg]. Uploaded a new patch to include that check.

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.003.patch, 
> YARN-4161.004.patch, YARN-4161.005.patch, YARN-4161.006.patch, 
> YARN-4161.patch, YARN-4161.patch.1
>
>
> The Capacity Scheduler currently schedules multiple containers per heartbeat 
> if more resources are available on the node.
> This approach generally works fine; however, in some cases it does not 
> distribute the load across the cluster, so cluster throughput suffers. I am 
> adding a feature to drive this via configuration, so that we can control the 
> number of containers assigned per heartbeat.
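As a sketch of the scheduling-loop change this implies (the constant and the 
surrounding structure are hypothetical illustrations, not the actual Capacity 
Scheduler code):

{code}
// Sketch: cap the number of container assignments per node heartbeat so
// allocations spread across the cluster instead of packing one node.
public class HeartbeatAssignmentSketch {

  static final int MAX_ASSIGNMENTS_PER_HEARTBEAT = 2; // from configuration

  interface Node {
    boolean hasAvailableResources();
    boolean tryAssignContainer();
  }

  static int assignContainers(Node node) {
    int assigned = 0;
    // Previously the loop ran while the node had resources; the new knob
    // also stops it after a configured number of assignments.
    while (node.hasAvailableResources()
        && assigned < MAX_ASSIGNMENTS_PER_HEARTBEAT) {
      if (!node.tryAssignContainer()) {
        break;
      }
      assigned++;
    }
    return assigned;
  }

  public static void main(String[] args) {
    Node node = new Node() {
      int capacity = 5;
      public boolean hasAvailableResources() { return capacity > 0; }
      public boolean tryAssignContainer() { capacity--; return true; }
    };
    System.out.println(assignContainers(node)); // 2, not 5
  }
}
{code}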



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6867) AbstractYarnScheduler reports the configured maximum resources, instead of the actual, even after the configured waittime

2017-07-25 Thread Muhammad Samir Khan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100280#comment-16100280
 ] 

Muhammad Samir Khan commented on YARN-6867:
---

The patch in YARN-4719 solves the problem.

> AbstractYarnScheduler reports the configured maximum resources, instead of 
> the actual, even after the configured waittime
> -
>
> Key: YARN-6867
> URL: https://issues.apache.org/jira/browse/YARN-6867
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Muhammad Samir Khan
>Assignee: Nathan Roberts
> Attachments: YARN-6867.001.patch
>
>
> AbstractYarnScheduler has a configured wait time during which it reports the 
> maximum resources from the configuration instead of the actual resources 
> available in the cluster. However, the first query after the wait time 
> expires is still answered with the configured maximum resources instead of 
> the actual maximum resources. This can result in an app submission failing 
> with an InvalidResourceRequestException (will attach a unit test in the 
> patch), since the maximum resources reported by the RM are different from 
> the ones it sanity-checks against at app submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6867) AbstractYarnScheduler reports the configured maximum resources, instead of the actual, even after the configured waittime

2017-07-25 Thread Muhammad Samir Khan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100276#comment-16100276
 ] 

Muhammad Samir Khan commented on YARN-6867:
---

Sorry, it looks like the problem was solved in trunk via YARN-4719. I should 
have checked on trunk before proceeding. Closing the JIRA now.

> AbstractYarnScheduler reports the configured maximum resources, instead of 
> the actual, even after the configured waittime
> -
>
> Key: YARN-6867
> URL: https://issues.apache.org/jira/browse/YARN-6867
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Muhammad Samir Khan
>Assignee: Nathan Roberts
> Attachments: YARN-6867.001.patch
>
>
> AbstractYarnScheduler has a configured wait time during which it reports the 
> maximum resources from the configuration instead of the actual resources 
> available in the cluster. However, the first query after the wait time 
> expires is still answered with the configured maximum resources instead of 
> the actual maximum resources. This can result in an app submission failing 
> with an InvalidResourceRequestException (will attach a unit test in the 
> patch), since the maximum resources reported by the RM are different from 
> the ones it sanity-checks against at app submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6867) AbstractYarnScheduler reports the configured maximum resources, instead of the actual, even after the configured waittime

2017-07-25 Thread Muhammad Samir Khan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100208#comment-16100208
 ] 

Muhammad Samir Khan commented on YARN-6867:
---

Requesting [~rkanter] and [~kasha] for comments.

> AbstractYarnScheduler reports the configured maximum resources, instead of 
> the actual, even after the configured waittime
> -
>
> Key: YARN-6867
> URL: https://issues.apache.org/jira/browse/YARN-6867
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Muhammad Samir Khan
>Assignee: Nathan Roberts
> Attachments: YARN-6867.001.patch
>
>
> AbstractYarnScheduler has a configured wait time during which it reports the 
> maximum resources from the configuration instead of the actual resources 
> available in the cluster. However, the first query after the wait time 
> expires is still answered with the configured maximum resources instead of 
> the actual maximum resources. This can result in an app submission failing 
> with an InvalidResourceRequestException (will attach a unit test in the 
> patch), since the maximum resources reported by the RM are different from 
> the ones it sanity-checks against at app submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6867) AbstractYarnScheduler reports the configured maximum resources, instead of the actual, even after the configured waittime

2017-07-25 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts reassigned YARN-6867:


Assignee: Nathan Roberts

> AbstractYarnScheduler reports the configured maximum resources, instead of 
> the actual, even after the configured waittime
> -
>
> Key: YARN-6867
> URL: https://issues.apache.org/jira/browse/YARN-6867
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Muhammad Samir Khan
>Assignee: Nathan Roberts
> Attachments: YARN-6867.001.patch
>
>
> AbstractYarnScheduler has a configured wait time during which it reports the 
> maximum resources from the configuration instead of the actual resources 
> available in the cluster. However, the first query after the wait time 
> expires is still answered with the configured maximum resources instead of 
> the actual maximum resources. This can result in an app submission failing 
> with an InvalidResourceRequestException (will attach a unit test in the 
> patch), since the maximum resources reported by the RM are different from 
> the ones it sanity-checks against at app submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6867) AbstractYarnScheduler reports the configured maximum resources, instead of the actual, even after the configured waittime

2017-07-25 Thread Muhammad Samir Khan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100193#comment-16100193
 ] 

Muhammad Samir Khan commented on YARN-6867:
---

If the application gets submitted before the wait time has expired, then the 
sanity check for resources will pass and the app submission will go through. 
However, if the requested resources are more than what is available in the 
cluster, then the app will "hang" forever waiting for the AM container to be 
allocated. I think YARN-56 captures this issue.

> AbstractYarnScheduler reports the configured maximum resources, instead of 
> the actual, even after the configured waittime
> -
>
> Key: YARN-6867
> URL: https://issues.apache.org/jira/browse/YARN-6867
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Muhammad Samir Khan
> Attachments: YARN-6867.001.patch
>
>
> AbstractYarnScheduler has a configured wait time during which it reports the 
> maximum resources from the configuration instead of the actual resources 
> available in the cluster. However, the first query after the wait time 
> expires is still answered with the configured maximum resources instead of 
> the actual maximum resources. This can result in an app submission failing 
> with an InvalidResourceRequestException (will attach a unit test in the 
> patch), since the maximum resources reported by the RM are different from 
> the ones it sanity-checks against at app submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6867) AbstractYarnScheduler reports the configured maximum resources, instead of the actual, even after the configured waittime

2017-07-25 Thread Muhammad Samir Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Samir Khan updated YARN-6867:
--
Attachment: YARN-6867.001.patch

Added a new unit test in TestRM to demonstrate how an app submission can fail 
with an InvalidResourceRequestException being thrown. Changed 
AbstractYarnScheduler to report the correct value after the wait time is over. 
Also added a unit test to TestAbstractYarnScheduler.

This does not handle all the corner cases, e.g. the RM reporting the max 
values before the wait time was over while the app was submitted after it had 
expired. But this should handle the more reproducible one.
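A minimal sketch of the intended read-path behavior, with hypothetical field 
and method names standing in for the actual {{AbstractYarnScheduler}} logic:

{code}
// Sketch: re-check the elapsed time on the read path, so the first query
// after the wait time expires already reports the actual cluster maximum.
public class MaxResourceReporterSketch {

  private final long waitTimeMs;
  private final long startTimeMs = System.currentTimeMillis();
  private volatile boolean useConfiguredMaximumOnly = true;

  private final long configuredMaxMemoryMB;
  private volatile long clusterMaxMemoryMB; // tracked from node updates

  MaxResourceReporterSketch(long waitTimeMs, long configuredMaxMemoryMB) {
    this.waitTimeMs = waitTimeMs;
    this.configuredMaxMemoryMB = configuredMaxMemoryMB;
  }

  void onNodeUpdate(long largestNodeMemoryMB) {
    clusterMaxMemoryMB = largestNodeMemoryMB;
  }

  // The bug shape: if the timer is only checked when nodes update, a query
  // arriving after the wait time can still see the configured maximum.
  // Checking the elapsed time here, on the read path, fixes that.
  long getMaximumMemoryMB() {
    if (useConfiguredMaximumOnly) {
      if (System.currentTimeMillis() - startTimeMs > waitTimeMs) {
        useConfiguredMaximumOnly = false;
      } else {
        return configuredMaxMemoryMB;
      }
    }
    return clusterMaxMemoryMB;
  }

  public static void main(String[] args) throws InterruptedException {
    MaxResourceReporterSketch s = new MaxResourceReporterSketch(100, 8192);
    s.onNodeUpdate(4096);
    System.out.println(s.getMaximumMemoryMB()); // 8192 during the wait
    Thread.sleep(150);
    System.out.println(s.getMaximumMemoryMB()); // 4096 after it expires
  }
}
{code}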

> AbstractYarnScheduler reports the configured maximum resources, instead of 
> the actual, even after the configured waittime
> -
>
> Key: YARN-6867
> URL: https://issues.apache.org/jira/browse/YARN-6867
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Muhammad Samir Khan
> Attachments: YARN-6867.001.patch
>
>
> AbstractYarnScheduler has a configured wait time during which it reports the 
> maximum resources from the configuration instead of the actual resources 
> available in the cluster. However, the first query after the wait time 
> expires is still answered with the configured maximum resources instead of 
> the actual maximum resources. This can result in an app submission failing 
> with an InvalidResourceRequestException (will attach a unit test in the 
> patch), since the maximum resources reported by the RM are different from 
> the ones it sanity-checks against at app submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6802) Support view leaf queue am resource usage in RM web ui

2017-07-25 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6802:
--
Attachment: YARN-6802.002.patch

> Support view leaf queue am resource usage in RM web ui
> --
>
> Key: YARN-6802
> URL: https://issues.apache.org/jira/browse/YARN-6802
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.8.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> YARN-6802.001.patch, YARN-6802.002.patch
>
>
> The RM web UI should support viewing leaf queue AM resource usage. 
> !screenshot-2.png!
> I will upload my patch later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6867) AbstractYarnScheduler reports the configured maximum resources, instead of the actual, even after the configured waittime

2017-07-25 Thread Muhammad Samir Khan (JIRA)
Muhammad Samir Khan created YARN-6867:
-

 Summary: AbstractYarnScheduler reports the configured maximum 
resources, instead of the actual, even after the configured waittime
 Key: YARN-6867
 URL: https://issues.apache.org/jira/browse/YARN-6867
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Reporter: Muhammad Samir Khan


AbstractYarnScheduler has a configured wait time during which it reports the 
maximum resources from the configuration instead of the actual resources 
available in the cluster. However, the first query after the wait time 
expires is still answered with the configured maximum resources instead of 
the actual maximum resources. This can result in an app submission failing 
with an InvalidResourceRequestException (will attach a unit test in the 
patch), since the maximum resources reported by the RM are different from 
the ones it sanity-checks against at app submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-25 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100167#comment-16100167
 ] 

Eric Payne commented on YARN-5892:
--

Thanks [~sunilg] for the review.
{quote}
In ActiveUsersManager, could we avoid activeUsersChanged if possible. May be we 
could keep an active set in ActiveUsersManager itself and we could clear this 
set when activate/deactivateApplication is invoked.
{quote}
In {{ActiveUsersManager}}, the keyset of {{usersApplications}} already holds a 
list of active users, but it's not enough to know whether there are users in 
this keyset. {{LeafQueue}} also needs to know when a user is added to or 
removed from this set, so that {{LeafQueue#computeUserLimit}} only recomputes 
the sum of the active users' weights after a user goes active or inactive, 
not every time through the code.

{{LeafQueue}} could possibly keep its own copy of 
{{ActiveUsersManager#getNumActiveUsers}} and then compare it with 
{{getNumActiveUsers}} when it needs to know whether it must recompute the sum 
of active user weights. However, that seems more complicated and error-prone 
than using {{activeUsersChanged}}.
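As a minimal sketch of this flag-based approach (simplified, illustrative 
names rather than the actual {{ActiveUsersManager}}/{{LeafQueue}} code):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: recompute the sum of active users' weights only when the active
// set changes, signaled by an "active users changed" flag.
public class ActiveUserWeightsSketch {

  private final Map<String, Float> userWeights = new ConcurrentHashMap<>();
  private final Map<String, Integer> activeAppsPerUser =
      new ConcurrentHashMap<>();
  private final AtomicBoolean activeUsersChanged = new AtomicBoolean(true);
  private volatile float cachedActiveWeightSum = 0f;

  void activateApplication(String user) {
    activeAppsPerUser.merge(user, 1, Integer::sum);
    activeUsersChanged.set(true);
  }

  void deactivateApplication(String user) {
    activeAppsPerUser.computeIfPresent(user,
        (u, count) -> count == 1 ? null : count - 1);
    activeUsersChanged.set(true);
  }

  // Called from the hot user-limit computation path: the sum is recomputed
  // only when the flag indicates the active set changed.
  float getActiveWeightSum() {
    if (activeUsersChanged.compareAndSet(true, false)) {
      float sum = 0f;
      for (String user : activeAppsPerUser.keySet()) {
        sum += userWeights.getOrDefault(user, 1f); // default weight 1
      }
      cachedActiveWeightSum = sum;
    }
    return cachedActiveWeightSum;
  }

  public static void main(String[] args) {
    ActiveUserWeightsSketch sketch = new ActiveUserWeightsSketch();
    sketch.userWeights.put("jane", 3f);
    sketch.activateApplication("jane");
    sketch.activateApplication("bob");
    System.out.println(sketch.getActiveWeightSum()); // 4.0, recomputed
    System.out.println(sketch.getActiveWeightSum()); // 4.0, from the cache
  }
}
{code}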


> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha3
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch, 
> YARN-5892.branch-2.016.patch, YARN-5892.branch-2.8.016.patch, 
> YARN-5892.branch-2.8.017.patch, YARN-5892.branch-2.8.018.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>
>       yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent
>     </name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>
>       yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent
>     </name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6726) Fix issues with docker commands executed by container-executor

2017-07-25 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16100062#comment-16100062
 ] 

Shane Kumpf commented on YARN-6726:
---

Thanks for the review, [~wangda].

I will add additional validation of the container name.

Regarding #2, let me try to explain with an example.

We run {{docker inspect}} via 
{{PrivilegedOperationExecutor#executePrivilegedOperation}} and one of the 
options is to "grabOutput" from the command executed by c-e. In the {{docker 
inspect}} case, this is the IP address and hostname of the container. LOGFILE 
is set to stdout, which is where {{docker inspect}} output is "grabbed" from. 
When the fflush occurs on the LOGFILE, "grabOutput" returns not only the output 
from {{docker inspect}} but also this log entry. Note in the output below how 
the "IP" entry for the container is the log entry and not the IP address. Let 
me know if this is still unclear.

{code}
2017-07-25 13:30:03,596 DEBUG 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 1 
container statuses: [ContainerStatus: [ContainerId: 
container_1500987852094_0001_01_01, ExecutionType: GUARANTEED, State: 
RUNNING, Capability: , Diagnostics: , ExitStatus: -1000, 
IP: [ Using command inspect 
--format='{{range(.NetworkSettings.Networks)}}{{.IPAddress}}, 
{{end}}{{.Config.Hostname}}' container_1500987852094_0001_01_01], Host: 
localhost.localdomain]]
{code}
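
To make the mechanics concrete, a simplified standalone sketch (hypothetical 
c-e arguments, not the actual {{PrivilegedOperationExecutor}} code) of why the 
log entry ends up in the grabbed output:

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class GrabOutputDemo {
  public static void main(String[] args) throws IOException {
    // Hypothetical invocation; the real container-executor arguments differ.
    Process p = new ProcessBuilder("container-executor", "--inspect").start();
    try (BufferedReader stdout = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      // "grabOutput" amounts to reading the child's stdout. With LOGFILE set
      // to stdout, the fflush()ed "Using command inspect ..." log entry
      // arrives on the same stream as the docker inspect result, so the two
      // cannot be told apart here.
      String grabbed = stdout.lines().collect(Collectors.joining("\n"));
      System.out.println(grabbed);
    }
  }
}
{code}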

> Fix issues with docker commands executed by container-executor
> --
>
> Key: YARN-6726
> URL: https://issues.apache.org/jira/browse/YARN-6726
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6726.001.patch
>
>
> docker inspect, rm, stop, etc. are issued through container-executor. Commands 
> other than docker run are not functioning properly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-07-25 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099872#comment-16099872
 ] 

Rohith Sharma K S commented on YARN-6734:
-

bq. Can we refactor store*ToSubApplicationTable methods. They are quite similar 
to storeInfo, storeMetrics, storeConfigs, etc. methods.
Looking into the code, it appears the refactoring should be done more broadly, 
i.e. starting from ColumnFamily and Column. I am fine with refactoring it all 
together in this JIRA itself, but it would be a very large code change. On the 
flip side, we could take it up separately, where we can do a more granular 
review. I am fine either way.

> Ensure sub-application user is extracted & sent to timeline service
> ---
>
> Key: YARN-6734
> URL: https://issues.apache.org/jira/browse/YARN-6734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6734-YARN-5355.001.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information (YARN-6733).
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table.
> YARN-6733 tracks the code changes needed for table schema creation.
> This jira tracks writing to that table, updating the user name fields to 
> include the sub-application user, etc. This would mean adding a field to Flow 
> Context which can store an additional user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099810#comment-16099810
 ] 

Varun Saxena commented on YARN-6850:


Thanks [~vrushalic] for review and commit. Thanks [~jrottinghuis] and 
[~rohithsharma] for reviews.

> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and is 
> called for every put.
> We need to ensure that the cell timestamps for metrics in entity and 
> application (and sub application) tables are "correct" timestamps since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table by the 
> coprocessor which intercepts all reads & writes to cells in this table. It 
> looks at the supplemented timestamp to figure out which app id this 
> particular cell belongs to. This is done in order to ensure no collision 
> occurs when two apps belonging to same flow run write the same metric at the 
> same timestamp. 
> Discovered in the discussion in YARN-4455 
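
As a rough illustration of the supplement scheme described above (simplified; 
the actual {{TimestampGenerator}} logic differs in detail, and the multiplier 
here is an assumed value), the idea is to fold an app-specific offset into the 
low-order digits of the cell timestamp:

{code}
public class SupplementedTimestampDemo {
  // Assumed width of the app-specific offset; for illustration only.
  static final long TS_MULTIPLIER = 1_000_000L;

  // Shift the real timestamp left and add a per-app offset, so two apps of
  // the same flow run writing the same metric at the same ts get distinct
  // cell versions instead of silently overwriting each other.
  static long supplement(long ts, String appId) {
    long offset = Math.floorMod(appId.hashCode(), TS_MULTIPLIER);
    return ts * TS_MULTIPLIER + offset;
  }

  // The coprocessor side can still recover the real timestamp.
  static long original(long supplementedTs) {
    return supplementedTs / TS_MULTIPLIER;
  }

  public static void main(String[] args) {
    long ts = System.currentTimeMillis();
    System.out.println(supplement(ts, "application_1_0001"));
    System.out.println(supplement(ts, "application_1_0002"));
  }
}
{code}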



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-07-25 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6133:
---
Summary: [ATSv2 Security] Renew delegation token for app automatically if 
an app collector is active  (was: [Security] Renew delegation token for app 
automatically if an app collector is active)

> [ATSv2 Security] Renew delegation token for app automatically if an app 
> collector is active
> ---
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-25 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099801#comment-16099801
 ] 

Rohith Sharma K S commented on YARN-6733:
-

committing shortly

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch, 
> YARN-6733-YARN-5355.006.patch, YARN-6733-YARN-5355.007.patch, 
> YARN-6733-YARN-5355.008.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table.
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6788) Improve performance of resource profile branch

2017-07-25 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6788:
--
Attachment: YARN-6788-YARN-3926.013.patch

updating patch.

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch, YARN-6788-YARN-3926.005.patch, 
> YARN-6788-YARN-3926.006.patch, YARN-6788-YARN-3926.007.patch, 
> YARN-6788-YARN-3926.008.patch, YARN-6788-YARN-3926.009.patch, 
> YARN-6788-YARN-3926.010.patch, YARN-6788-YARN-3926.011.patch, 
> YARN-6788-YARN-3926.012.patch, YARN-6788-YARN-3926.013.patch
>
>
> Currently we can see a 15% performance delta with this branch. 
> This jira makes a few performance improvements to address that.
> Also this patch will handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-07-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099745#comment-16099745
 ] 

Varun Saxena commented on YARN-6734:


[~rohithsharma], can we add a test to verify the write into the sub-application 
table? Other than that, I have no further comments on the patch as it stands.

> Ensure sub-application user is extracted & sent to timeline service
> ---
>
> Key: YARN-6734
> URL: https://issues.apache.org/jira/browse/YARN-6734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6734-YARN-5355.001.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information (YARN-6733).
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table.
> YARN-6733 tracks the code changes needed for table schema creation.
> This jira tracks writing to that table, updating the user name fields to 
> include the sub-application user, etc. This would mean adding a field to Flow 
> Context which can store an additional user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099731#comment-16099731
 ] 

Hudson commented on YARN-6150:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12052 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12052/])
YARN-6150. TestContainerManagerSecurity tests for Yarn Server are (aajisaka: 
rev 218b1b33ffe83cf2e330a2aa90685d0c14547a3d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java


> TestContainerManagerSecurity tests for Yarn Server are flakey
> -
>
> Key: YARN-6150
> URL: https://issues.apache.org/jira/browse/YARN-6150
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Daniel Sturman
>Assignee: Daniel Sturman
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6150.001.patch, YARN-6150.002.patch, 
> YARN-6150.003.patch, YARN-6150.004.patch, YARN-6150.005.patch, 
> YARN-6150.006.patch, YARN-6150.007.patch
>
>
> Repeated runs of 
> {{org.apache.hadoop.yarn.server.TestContainerManagerSecurity}} on the same 
> codebase can either pass or fail. Also, the two runs (one in secure mode, one 
> without security) aren't well labeled in JUnit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6240) TestCapacityScheduler.testRefreshQueuesWithQueueDelete fails randomly

2017-07-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099615#comment-16099615
 ] 

Naganarasimha G R commented on YARN-6240:
-

Hi [~rohithsharma], 
As discussed offline, this needs additional information since it is an 
intermittent failure, so I have uploaded a patch adding more logs in this jira 
as well as in YARN-6741. Once either one gets committed, we should be able to 
analyze further and make some progress.

> TestCapacityScheduler.testRefreshQueuesWithQueueDelete fails randomly
> -
>
> Key: YARN-6240
> URL: https://issues.apache.org/jira/browse/YARN-6240
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Sunil G
>Assignee: Naganarasimha G R
> Attachments: YARN-6240.001.patch
>
>
> *Error Message*
> Expected to NOT throw exception when refresh queue tries to delete a queue 
> WITHOUT running apps
> Link 
> [here|https://builds.apache.org/job/PreCommit-YARN-Build/15092/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity/TestCapacityScheduler/testRefreshQueuesWithQueueDelete/]
> *Stacktrace*
> {code}
> java.lang.AssertionError: Expected to NOT throw exception when refresh queue 
> tries to delete a queue WITHOUT running apps
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testRefreshQueuesWithQueueDelete(TestCapacityScheduler.java:3875)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6678) Committer thread crashes with IllegalStateException in async-scheduling mode of CapacityScheduler

2017-07-25 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6678:
---
Attachment: YARN-6678.branch-2.005.patch

Attached a branch-2 patch that applies cleanly. Thanks [~sunilg] and 
[~leftnoteasy] for the reviews and commits.

> Committer thread crashes with IllegalStateException in async-scheduling mode 
> of CapacityScheduler
> -
>
> Key: YARN-6678
> URL: https://issues.apache.org/jira/browse/YARN-6678
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6678.001.patch, YARN-6678.002.patch, 
> YARN-6678.003.patch, YARN-6678.004.patch, YARN-6678.005.patch, 
> YARN-6678.branch-2.005.patch
>
>
> Error log:
> {noformat}
> java.lang.IllegalStateException: Trying to reserve container 
> container_e10_1495599791406_7129_01_001453 for application 
> appattempt_1495599791406_7129_01 when currently reserved container 
> container_e10_1495599791406_7123_01_001513 on node host: node0123:45454 
> #containers=40 available=... used=...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode.reserveResource(FiCaSchedulerNode.java:81)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1079)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
> {noformat}
> Reproduce this problem:
> 1. nm1 re-reserved app-1/container-X1 and generated reserve proposal-1
> 2. nm2 had enough resource for app-1, un-reserved app-1/container-X1 and 
> allocated app-1/container-X2
> 3. nm1 reserved app-2/container-Y
> 4. proposal-1 was accepted but throw IllegalStateException when applying
> Currently the check code for the reserve proposal in FiCaSchedulerApp#accept is 
> as follows:
> {code}
>   // Container reserved first time will be NEW, after the container
>   // accepted & confirmed, it will become RESERVED state
>   if (schedulerContainer.getRmContainer().getState()
>   == RMContainerState.RESERVED) {
> // Set reReservation == true
> reReservation = true;
>   } else {
> // When reserve a resource (state == NEW is for new container,
> // state == RUNNING is for increase container).
> // Just check if the node is not already reserved by someone
> if (schedulerContainer.getSchedulerNode().getReservedContainer()
> != null) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Try to reserve a container, but the node is "
> + "already reserved by another container="
> + schedulerContainer.getSchedulerNode()
> .getReservedContainer().getContainerId());
>   }
>   return false;
> }
>   }
> {code}
> The reserved container on the node of the reserve proposal is checked only for 
> first-reserve containers. For a re-reservation, we should also confirm that the 
> container reserved on this node is the same as the re-reserved container.
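
A hedged sketch (hypothetical, not the committed patch) of tightening the 
re-reservation branch accordingly:

{code}
if (schedulerContainer.getRmContainer().getState()
    == RMContainerState.RESERVED) {
  // Re-reservation: also confirm that the container currently reserved on
  // the node is this proposal's container. Otherwise the node was
  // un-reserved and re-used for another reservation after the proposal
  // was generated, so the proposal must be rejected.
  RMContainer reservedOnNode =
      schedulerContainer.getSchedulerNode().getReservedContainer();
  if (reservedOnNode == null || !reservedOnNode.getContainerId().equals(
      schedulerContainer.getRmContainer().getContainerId())) {
    return false;
  }
  reReservation = true;
}
{code}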



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6031) Application recovery has failed when node label feature is turned off during RM recovery

2017-07-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099594#comment-16099594
 ] 

Sunil G commented on YARN-6031:
---

Hi [~jianhe]
A few doubts here.

bq.1. Below code catches InvalidLabelResourceRequestException and assumes that 
the error is because node-label becomes disabled
This code snippet catches InvalidLabelResourceRequestException and suppresses 
it only in the recovery case. If the AMResourceRequest was stored in the state 
store, that means {{validateAndCreateResourceRequest}} succeeded when the app 
was submitted. During recovery, the same call will throw this error only when 
node labels have been disabled by configuration. If the request is in the 
store, we can assume it is sane enough. Could you please give more context on 
any other scenario that could throw the same exception during recovery?
On another note, when not recovering we throw the same exception back ({{throw e;}}).

bq.2. Below code directly transitions app to failed by using a Rejected event. 
The attempt state is not moved to failed
In RMAppManager#createAndPopulateNewRMApp, the app is just created, whether in 
submission or recovery mode. The attempt is not yet created. Hence I think this 
won't be a problem.

bq.3. Is it ok to let the app continue in this scenario, it's less disruptive 
to the apps.
Currently the exception was thrown and the RM was losing the context of such an 
app. To record and track such an app, we create the app and move it to the 
failed state. Recovery of the other apps then continues, and we retain the 
context of this app as well.
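
As a sketch of the recovery-time suppression discussed in point 1 (simplified, 
with hypothetical local names such as {{amReqs}} and {{isRecovery}}):

{code}
try {
  amReqs = validateAndCreateResourceRequest(submissionContext, isRecovery);
} catch (InvalidLabelResourceRequestException e) {
  if (!isRecovery) {
    // Normal submission: surface the error to the client.
    throw e;
  }
  // Recovery with node labels disabled by conf: keep the app in the RM
  // context and drive it to the failed state via a rejected event,
  // instead of aborting recovery of all apps.
  rmContext.getDispatcher().getEventHandler().handle(
      new RMAppEvent(applicationId, RMAppEventType.APP_REJECTED,
          e.getMessage()));
}
{code}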

> Application recovery has failed when node label feature is turned off during 
> RM recovery
> 
>
> Key: YARN-6031
> URL: https://issues.apache.org/jira/browse/YARN-6031
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.8.0
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6031.001.patch, YARN-6031.002.patch, 
> YARN-6031.003.patch, YARN-6031.004.patch, YARN-6031.005.patch, 
> YARN-6031.006.patch, YARN-6031.007.patch, YARN-6031-branch-2.8.001.patch
>
>
> Here is the repro steps:
> Enable node label, restart RM, configure CS properly, and run some jobs;
> Disable node label, restart RM, and the following exception is thrown:
> {noformat}
> Caused by: 
> org.apache.hadoop.yarn.exceptions.InvalidLabelResourceRequestException: 
> Invalid resource request, node label not enabled but request contains label 
> expression
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:225)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:248)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:394)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:339)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:436)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1165)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:574)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> ... 10 more
> {noformat}
> During RM restart, application recovery failed because the application had a 
> node label expression specified while node labels had been disabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6102) RMActiveService context to be updated with new RMContext on failover

2017-07-25 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099580#comment-16099580
 ] 

Rohith Sharma K S commented on YARN-6102:
-

thanks [~sunilg] [~jianhe] [~Naganarasimha] [~ajithshetty] for the reviews. 

> RMActiveService context to be updated with new RMContext on failover
> 
>
> Key: YARN-6102
> URL: https://issues.apache.org/jira/browse/YARN-6102
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ajith S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: eventOrder.JPG, YARN-6102.01.patch, YARN-6102.02.patch, 
> YARN-6102.03.patch, YARN-6102.04.patch, YARN-6102.05.patch, 
> YARN-6102.06.patch, YARN-6102.07.patch, YARN-6102-branch-2.001.patch, 
> YARN-6102-branch-2.002.patch
>
>
> {code}2017-01-17 16:42:17,911 FATAL [AsyncDispatcher event handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:dispatch(200)) - Error in 
> dispatcher thread
> java.lang.Exception: No handler for registered for class 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:196)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> 2017-01-17 16:42:17,914 INFO  [AsyncDispatcher ShutDown handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:run(303)) - Exiting, bbye..{code}
> The same stack was also noticed when {{TestResourceTrackerOnHA}} exits 
> abnormally; after some analysis, I was able to reproduce it.
> Once the node heartbeat is sent to the RM, inside 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.nodeHeartbeat(NodeHeartbeatRequest)}},
>  if RM failover is called before the event is sent to the dispatcher through 
> {{this.rmContext.getDispatcher().getEventHandler().handle(nodeStatusEvent);}}, 
> the dispatcher is reset.
> The new dispatcher is, however, first started and then the events are 
> registered, at 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.reinitialize(boolean)}}.
> So the event order will look like:
> 1. Send the node heartbeat to {{ResourceTrackerService}}.
> 2. In {{ResourceTrackerService.nodeHeartbeat}}, before passing the event to 
> the dispatcher, call RM failover.
> 3. In RM failover, the current active resets the dispatcher in reinitialize, 
> i.e. {{resetDispatcher();}} + {{createAndInitActiveServices();}}.
> Now between {{resetDispatcher();}} and {{createAndInitActiveServices();}}, 
> {{ResourceTrackerService.nodeHeartbeat}} invokes the dispatcher.
> This causes the above error: at the point in time when the {{STATUS_UPDATE}} 
> event is given to the dispatcher in {{ResourceTrackerService}}, the new 
> dispatcher (from the failover) may be started but not yet registered for 
> events.
> Using the same steps (pausing the JVM in a debugger), I was able to reproduce 
> this in a production cluster as well: the {{STATUS_UPDATE}} active-service 
> event is hit when the service has yet to forward the event to the RM 
> dispatcher, a failover is called, and the dispatcher reset falls between 
> {{resetDispatcher();}} and {{createAndInitActiveServices();}}.
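
A minimal illustration of that window (hypothetical handler wiring, simplified 
from the RM code):

{code}
AsyncDispatcher dispatcher = new AsyncDispatcher();
dispatcher.init(conf);
dispatcher.start();
// Any event handled in this window, e.g. a STATUS_UPDATE forwarded by
// ResourceTrackerService, fails with "No handler for registered for class
// ...RMNodeEventType" because registration has not happened yet.
dispatcher.register(RMNodeEventType.class, nodeEventHandler);
{code}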



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6307) Refactor FairShareComparator#compare

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099571#comment-16099571
 ] 

Hadoop QA commented on YARN-6307:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 2 unchanged - 6 fixed = 2 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6307 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878696/YARN-6307.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 78e57f813648 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c98201b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16537/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16537/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099573#comment-16099573
 ] 

Hadoop QA commented on YARN-6804:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-client-modules/hadoop-client-minicluster {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
52s{color} | {color:green} root generated 0 new + 1313 unchanged - 2 fixed = 
1313 total (was 1315) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 58s{color} | {color:orange} root: The patch generated 1 new + 58 unchanged - 
0 fixed = 59 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-client-modules/hadoop-client-minicluster {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
13s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6804 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-5548) Use MockRMMemoryStateStore to reduce test failures

2017-07-25 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5548:
---
Attachment: YARN-5548.0017.patch

> Use MockRMMemoryStateStore to reduce test failures
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy, test
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch, YARN-5548.0004.patch, YARN-5548.0005.patch, 
> YARN-5548.0006.patch, YARN-5548.0007.patch, YARN-5548.0008.patch, 
> YARN-5548.0009.patch, YARN-5548.0010.patch, YARN-5548.0011.patch, 
> YARN-5548.0012.patch, YARN-5548.0013.patch, YARN-5548.0014.patch, 
> YARN-5548.0015.patch, YARN-5548.0016.patch, YARN-5548.0017.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was: <application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-07-25 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057150#comment-16057150
 ] 

Akira Ajisaka edited comment on YARN-6150 at 7/25/17 6:20 AM:
--

If the hostname does not contain a dot and is not localhost, 
NetUtils.createSocketAddr calls SecurityUtil.getByNameWithSearch(String), which 
adds a trailing dot to the hostname (please see the Javadoc of 
SecurityUtil.getByExactName for the details) and then resolves the hostname. In 
this environment, "af73ca3dfb64." (with the trailing dot) cannot be resolved, so 
the error occurs.

Replacing the above code with
{code:title=TestContainerManagerSecurity#getContainerManagementProtocolProxy}
final InetSocketAddress addr =
new InetSocketAddress(nodeId.getHost(), nodeId.getPort());
{code}
fixes the error.


was (Author: ajisakaa):
If the hostname does not contain a dot and is not localhost, 
NetUtils.createSocketAddr calls SecurityUtils.getByNameWithSearch(String), which 
adds a trailing dot to the hostname (please see the Javadoc of 
NetUtils.getByExactName for the details) and then resolves the hostname. In this 
environment, "af73ca3dfb64." (with the trailing dot) cannot be resolved, so the 
error occurs.

Replacing the above code with
{code:title=TestContainerManagerSecurity#getContainerManagementProtocolProxy}
final InetSocketAddress addr =
new InetSocketAddress(nodeId.getHost(), nodeId.getPort());
{code}
fixes the error.

> TestContainerManagerSecurity tests for Yarn Server are flakey
> -
>
> Key: YARN-6150
> URL: https://issues.apache.org/jira/browse/YARN-6150
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Daniel Sturman
>Assignee: Daniel Sturman
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6150.001.patch, YARN-6150.002.patch, 
> YARN-6150.003.patch, YARN-6150.004.patch, YARN-6150.005.patch, 
> YARN-6150.006.patch, YARN-6150.007.patch
>
>
> Repeated runs of 
> {{org.apache.hadoop.yarn.server.TestContainerManagerSecurity}} on the same 
> codebase can either pass or fail. Also, the two runs (one in secure mode, one 
> without security) aren't well labeled in JUnit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4522) Queue acl can be checked at app submission

2017-07-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099555#comment-16099555
 ] 

Bibin A Chundatt commented on YARN-4522:


[~jianhe]

One clarification regarding this change.

If {{YARN_ACL_ENABLE}} is false, the {{ClientRMService#getQueueUserAcls}} result 
and the submission check are not in sync. IIUC, 
{{ClientRMService#getQueueUserAcls}} returns the effective ACLs for a queue; if 
the queue ACLs show that the submit permission is not available for a user, then 
submission should fail.

For example, for the default queue the effective {{getQueueUserAcls}} result 
will show no rights to submit an application, yet the user will still be able to 
submit one.
IMHO we should keep both in sync.
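
A sketch of the mismatch from the client's point of view (assuming an 
initialized {{YarnClient}} named {{yarnClient}} and {{yarn.acl.enable=false}}):

{code}
List<QueueUserACLInfo> acls = yarnClient.getQueueAclsInfo();
boolean submitAclGranted = acls.stream()
    .filter(info -> info.getQueueName().equals("default"))
    .anyMatch(info -> info.getUserAcls()
        .contains(QueueACL.SUBMIT_APPLICATIONS));
// submitAclGranted can be false here, yet
// yarnClient.submitApplication(appContext) still succeeds because the
// submit path skips the ACL check when ACLs are disabled.
{code}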

> Queue acl can be checked at app submission
> --
>
> Key: YARN-4522
> URL: https://issues.apache.org/jira/browse/YARN-4522
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-4522.1.patch, YARN-4522.2.patch, YARN-4522.3.patch, 
> YARN-4522.4.patch
>
>
> Queue acl check is currently asynchronously done at 
> CapacityScheduler#addApplication, this could be done right at submission.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6822) TestContainerManagerSecurity tests fail on trunk

2017-07-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved YARN-6822.
-
Resolution: Duplicate

Fixed by YARN-6150. Please feel free to reopen this if the issue persists.

> TestContainerManagerSecurity tests fail on trunk
> 
>
> Key: YARN-6822
> URL: https://issues.apache.org/jira/browse/YARN-6822
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>
> {code}
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager[0]
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:348)
>   at com.sun.proxy.$Proxy91.startContainers(Unknown Source)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:557)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:478)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:253)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:158)
> {code}
> {code}
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager[1]
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:348)
>   at com.sun.proxy.$Proxy91.startContainers(Unknown Source)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:557)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:478)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:253)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:158)
> {code}
> Logs -
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/463/testReport/org.apache.hadoop.yarn.server/TestContainerManagerSecurity/testContainerManager_0_/
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/463/testReport/org.apache.hadoop.yarn.server/TestContainerManagerSecurity/testContainerManager_1_/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-07-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-6150:

Component/s: (was: yarn)
 test

> TestContainerManagerSecurity tests for Yarn Server are flakey
> -
>
> Key: YARN-6150
> URL: https://issues.apache.org/jira/browse/YARN-6150
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Daniel Sturman
>Assignee: Daniel Sturman
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6150.001.patch, YARN-6150.002.patch, 
> YARN-6150.003.patch, YARN-6150.004.patch, YARN-6150.005.patch, 
> YARN-6150.006.patch, YARN-6150.007.patch
>
>
> Repeated runs of 
> {{org.apache.hadoop.yarn.server.TestContainerManagerSecurity}} on the same 
> codebase can either pass or fail. Also, the two runs (one in secure mode, one 
> without security) aren't well labeled in JUnit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6864) FSPreemptionThread cleanup for readability

2017-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099549#comment-16099549
 ] 

Hadoop QA commented on YARN-6864:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878684/YARN-6864.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c33cba0822e7 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c98201b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16536/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16536/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16536/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSPreemptionThread cleanup for readability
> --
>
> Key: YARN-6864
>