[jira] [Commented] (YARN-6971) Clean up different ways to create resources

2017-08-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119505#comment-16119505
 ] 

Yufei Gu commented on YARN-6971:


Sounds good to me. Feel free to close this or convert it to a sub-task.

> Clean up different ways to create resources
> ---
>
> Key: YARN-6971
> URL: https://issues.apache.org/jira/browse/YARN-6971
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, scheduler
>Reporter: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> There are several ways to create a {{Resource}} object, e.g., 
> BuilderUtils.newResource() and Resources.createResource(). These methods not 
> only cause confusion but also performance issues; for example, 
> BuilderUtils.newResource() is significantly slower than 
> Resources.createResource(). 
> We could merge them somehow, and replace most BuilderUtils.newResource() 
> calls with Resources.createResource().
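To make the duplication concrete, here is a minimal sketch contrasting the two
creation paths; the (memory, vcores) overloads are assumed from the utility
classes named above and may differ slightly across branches:

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.server.utils.BuilderUtils;
import org.apache.hadoop.yarn.util.resource.Resources;

public class ResourceCreationExample {
  public static void main(String[] args) {
    // Path 1: record-factory based helper, reported above as the slower one.
    Resource viaBuilder = BuilderUtils.newResource(1024, 1);
    // Path 2: the helper this JIRA proposes to standardize on.
    Resource viaResources = Resources.createResource(1024, 1);
    // Both produce an equivalent 1024 MB / 1 vcore resource.
    System.out.println(viaBuilder.equals(viaResources));
  }
}
{code}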






[jira] [Updated] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-09 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6903:
--
Attachment: YARN-6903.yarn-native-services.04.patch

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch, 
> YARN-6903.yarn-native-services.02.patch, 
> YARN-6903.yarn-native-services.03.patch, 
> YARN-6903.yarn-native-services.04.patch
>
>
> There are some new features in YARN core, like rich placement scheduling, 
> container auto-restart, and container upgrade, that the native-service 
> framework can take advantage of. Besides, there is quite a lot of legacy 
> code which is no longer required. 
> So we decided to rewrite the core part to get a leaner codebase and make use 
> of various advanced features in YARN. 
> The new code design will be in line with what we have designed for the 
> service API (YARN-4793).






[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119508#comment-16119508
 ] 

Rohith Sharma K S commented on YARN-6323:
-

Thanks Vrushali for reinitiating this thread. However, YARN-6736 plans to 
write into both the v1 and v2 timelines during upgrade. I think we should make 
use of that during a rolling upgrade so that the RM publishes data into both 
v1 and v2. On NM restart during an upgrade, inconsistencies are acceptable for 
running applications, but data for newer applications should be published 
properly.
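For reference, a possible yarn-site.xml sketch of that dual-write window; the 
{{yarn.timeline-service.versions}} property is the one proposed in YARN-6736, 
so treat the name and value format as an assumption until that patch lands:

{code}
<!-- Enable the timeline service and publish to both v1.5 and v2 during the
     rolling-upgrade window; drop back to "2.0f" alone once the upgrade is
     complete and all pre-upgrade applications have finished. -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.timeline-service.versions</name>
  <value>1.5f,2.0f</value>
</property>
{code}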

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 






[jira] [Created] (YARN-6975) Moving logging APIs over to slf4j in hadoop-yarn-server-tests, hadoop-yarn-server-web-proxy and hadoop-yarn-server-router

2017-08-09 Thread Yeliang Cang (JIRA)
Yeliang Cang created YARN-6975:
--

 Summary: Moving logging APIs over to slf4j in 
hadoop-yarn-server-tests, hadoop-yarn-server-web-proxy and 
hadoop-yarn-server-router
 Key: YARN-6975
 URL: https://issues.apache.org/jira/browse/YARN-6975
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Yeliang Cang
Assignee: Yeliang Cang









[jira] [Updated] (YARN-6975) Moving logging APIs over to slf4j in hadoop-yarn-server-tests, hadoop-yarn-server-web-proxy and hadoop-yarn-server-router

2017-08-09 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6975:
---
Attachment: YARN-6975.001.patch

> Moving logging APIs over to slf4j in hadoop-yarn-server-tests, 
> hadoop-yarn-server-web-proxy and hadoop-yarn-server-router
> -
>
> Key: YARN-6975
> URL: https://issues.apache.org/jira/browse/YARN-6975
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-6975.001.patch
>
>







[jira] [Commented] (YARN-6975) Moving logging APIs over to slf4j in hadoop-yarn-server-tests, hadoop-yarn-server-web-proxy and hadoop-yarn-server-router

2017-08-09 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119515#comment-16119515
 ] 

Yeliang Cang commented on YARN-6975:


LOG is unused in AppReportFetcher.java, so I removed it.
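For context, the mechanical change in this kind of migration looks like the
sketch below (the class name is illustrative, not taken from the patch):

{code}
// Before: commons-logging
//   private static final Log LOG = LogFactory.getLog(ProxyExample.class);

// After: slf4j
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProxyExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(ProxyExample.class);

  void fetchReport(String appId) {
    // slf4j {} placeholders defer string building until the level is enabled.
    LOG.info("Fetching report for application {}", appId);
  }
}
{code}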

> Moving logging APIs over to slf4j in hadoop-yarn-server-tests, 
> hadoop-yarn-server-web-proxy and hadoop-yarn-server-router
> -
>
> Key: YARN-6975
> URL: https://issues.apache.org/jira/browse/YARN-6975
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-6975.001.patch
>
>







[jira] [Updated] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-08-09 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6361:
--
Attachment: YARN-6361.001.patch

> FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big 
> queues
> 
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Miklos Szegedi
>Assignee: YunFan Zhou
> Attachments: dispatcherthread.png, threads.png, YARN-6361.001.patch, 
> YARN-6361.001.pre.patch
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the calculations outside the sort loop ({{O\(n\)}}) and 
> sorting by the precomputed values inside it ({{O(n*log\(n\))}}). This can 
> become a performance issue when there is a huge number of applications in a 
> single queue. The attachments show the performance impact when there are 10k 
> applications in one queue.
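A minimal sketch of that optimization, using hypothetical stand-ins (App,
usageRatio) rather than the real FairScheduler types:

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class FetchAppsSketch {
  // Stand-in for a schedulable app; usageRatio() represents the expensive
  // value that FairShareComparator would otherwise recompute per comparison.
  static class App {
    final long usage;
    final long fairShare;
    App(long usage, long fairShare) {
      this.usage = usage;
      this.fairShare = fairShare;
    }
    double usageRatio() {
      return (double) usage / Math.max(1, fairShare);
    }
  }

  // Pairs an app with its cached sort key.
  static class Keyed {
    final App app;
    final double key;
    Keyed(App app, double key) {
      this.app = app;
      this.key = key;
    }
  }

  static List<App> sortByDemand(List<App> apps) {
    List<Keyed> keyed = new ArrayList<>(apps.size());
    for (App a : apps) {
      keyed.add(new Keyed(a, a.usageRatio())); // O(n): compute each key once
    }
    // O(n log n), comparing only the cached primitive keys.
    keyed.sort(Comparator.comparingDouble(k -> k.key));
    List<App> sorted = new ArrayList<>(keyed.size());
    for (Keyed k : keyed) {
      sorted.add(k.app);
    }
    return sorted;
  }
}
{code}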






[jira] [Issue Comment Deleted] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-08-09 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6361:
--
Comment: was deleted

(was: Due to issues with my personal environment, I uploaded an incomplete 
patch; I will upload the complete patch later. 

[~yufeigu] [~Naganarasimha], could you give me some suggestions on the 
corresponding code? Thank you very much.)

> FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big 
> queues
> 
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Miklos Szegedi
>Assignee: YunFan Zhou
> Attachments: dispatcherthread.png, threads.png, YARN-6361.001.patch
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the calculations outside the sort loop ({{O\(n\)}}) and 
> sorting by the precomputed values inside it ({{O(n*log\(n\))}}). This can 
> become a performance issue when there is a huge number of applications in a 
> single queue. The attachments show the performance impact when there are 10k 
> applications in one queue.






[jira] [Updated] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-08-09 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6361:
--
Attachment: (was: YARN-6361.001.pre.patch)

> FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big 
> queues
> 
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Miklos Szegedi
>Assignee: YunFan Zhou
> Attachments: dispatcherthread.png, threads.png, YARN-6361.001.patch
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the calculations outside the sort loop ({{O\(n\)}}) and 
> sorting by the precomputed values inside it ({{O(n*log\(n\))}}). This can 
> become a performance issue when there is a huge number of applications in a 
> single queue. The attachments show the performance impact when there are 10k 
> applications in one queue.






[jira] [Issue Comment Deleted] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-08-09 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6361:
--
Comment: was deleted

(was: [~yufeigu] [~Naganarasimha] Hi, Yufei, Naganarasimha.
I've found that this optimization can improve performance by about 30 percent. 
I will upload the full patch tomorrow.)

> FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big 
> queues
> 
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Miklos Szegedi
>Assignee: YunFan Zhou
> Attachments: dispatcherthread.png, threads.png, YARN-6361.001.patch
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the calculations outside the sort loop ({{O\(n\)}}) and 
> sorting by the precomputed values inside it ({{O(n*log\(n\))}}). This can 
> become a performance issue when there is a huge number of applications in a 
> single queue. The attachments show the performance impact when there are 10k 
> applications in one queue.






[jira] [Issue Comment Deleted] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-08-09 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6361:
--
Comment: was deleted

(was: [~yufeigu]  Thank you very much, Yufei. 
I have to say that this JIRA's description captures a very good idea, and I 
have implemented the corresponding code.
Before sorting, I compute the properties of each application in *O(n)*, and 
then order all applications by those precomputed properties in O(n*log(n)).
It does look a lot faster. I'll run the benchmark tomorrow to see how much 
this optimization improves performance compared to the unoptimized code.)

> FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big 
> queues
> 
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Miklos Szegedi
>Assignee: YunFan Zhou
> Attachments: dispatcherthread.png, threads.png, YARN-6361.001.patch
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the calculations outside the sort loop ({{O\(n\)}}) and 
> sorting by the precomputed values inside it ({{O(n*log\(n\))}}). This can 
> become a performance issue when there is a huge number of applications in a 
> single queue. The attachments show the performance impact when there are 10k 
> applications in one queue.






[jira] [Commented] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-08-09 Thread YunFan Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119528#comment-16119528
 ] 

YunFan Zhou commented on YARN-6361:
---

[~yufeigu][~Naganarasimha] Hi, Yufei, Naganarasimha. Could you please help me 
review my code? Thank you.

> FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big 
> queues
> 
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Miklos Szegedi
>Assignee: YunFan Zhou
> Attachments: dispatcherthread.png, threads.png, YARN-6361.001.patch
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the calculations outside the sort loop ({{O\(n\)}}) and 
> sorting by the precomputed values inside it ({{O(n*log\(n\))}}). This can 
> become a performance issue when there is a huge number of applications in a 
> single queue. The attachments show the performance impact when there are 10k 
> applications in one queue.






[jira] [Updated] (YARN-6971) Clean up different ways to create resources

2017-08-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6971:
--
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-3926

> Clean up different ways to create resources
> ---
>
> Key: YARN-6971
> URL: https://issues.apache.org/jira/browse/YARN-6971
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> There are several ways to create a {{Resource}} object, e.g., 
> BuilderUtils.newResource() and Resources.createResource(). These methods not 
> only cause confusion but also performance issues; for example, 
> BuilderUtils.newResource() is significantly slower than 
> Resources.createResource(). 
> We could merge them somehow, and replace most BuilderUtils.newResource() 
> calls with Resources.createResource().






[jira] [Commented] (YARN-6971) Clean up different ways to create resources

2017-08-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119536#comment-16119536
 ] 

Sunil G commented on YARN-6971:
---

Thanks [~yufeigu]. Converted to a sub-task.

> Clean up different ways to create resources
> ---
>
> Key: YARN-6971
> URL: https://issues.apache.org/jira/browse/YARN-6971
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> There are several ways to create a {{Resource}} object, e.g., 
> BuilderUtils.newResource() and Resources.createResource(). These methods not 
> only cause confusion but also performance issues; for example, 
> BuilderUtils.newResource() is significantly slower than 
> Resources.createResource(). 
> We could merge them somehow, and replace most BuilderUtils.newResource() 
> calls with Resources.createResource().






[jira] [Commented] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119539#comment-16119539
 ] 

Hadoop QA commented on YARN-6885:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 190 new + 16 unchanged - 2 fixed = 206 total (was 18) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 87 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Impossible cast from Double to Float in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:Float in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:[line 544] |
|  |  Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.Allo

[jira] [Commented] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-08-09 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119591#comment-16119591
 ] 

Manikandan R commented on YARN-6953:


[~sunilg] Thanks for the review.

Agree with your comments. I had a similar thought (to avoid duplicating code) 
while making the changes, fluctuated back and forth for some time :), and 
settled on this patch assuming the mandatory resource count is always going 
to be 2. Will clean it up.

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.
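A rough sketch of the simplification being suggested, handling the two 
mandatory resources explicitly; the config constants are real 
YarnConfiguration keys, but the map-based helper is a hypothetical stand-in 
for the actual ResourceUtils internals:

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MandatoryResourceMinimums {
  // Handle memory and vcores directly instead of looping over an array,
  // so each warning and default can name the right resource.
  static void setMinimums(Map<String, Long> minimums, Configuration conf) {
    minimums.put("memory-mb", (long) conf.getInt(
        YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB));
    minimums.put("vcores", (long) conf.getInt(
        YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES,
        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES));
  }

  public static void main(String[] args) {
    Map<String, Long> mins = new HashMap<>();
    setMinimums(mins, new Configuration());
    System.out.println(mins);
  }
}
{code}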






[jira] [Commented] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-08-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119593#comment-16119593
 ] 

Sunil G commented on YARN-6852:
---

Thanks [~leftnoteasy] for the patch.

A few minor comments:
In {{get_numbers_split_by_comma}}
# it's better to pass {{input}} as {{const}}
# in the code below
{code}
while (p != NULL) {
  int n = strtol(p, NULL, 0);
  n_numbers++;
{code}
If {{strtol}} fails, we need to check {{errno}} and only then process the 
input, correct?
# One more doubt
{code}
// Use cgroup helpers to blacklist devices
for (int i = 0; i < n_minor_devices_to_block; i++) {
  char param_value[128];
  snprintf(param_value, sizeof(param_value), "c %d:%d rwm",
           major_device_number, i);
{code}
Is {{param_value}} null-terminated after the snprintf?
# One more small suggestion:
{{update_cgroups_parameters_func_p}} takes inputs like "devices" or "deny". Is 
it better to define all such "entities" and "verbs" in a common include and 
use them as macros?

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).






[jira] [Comment Edited] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-08-09 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119591#comment-16119591
 ] 

Manikandan R edited comment on YARN-6953 at 8/9/17 9:00 AM:


[~sunilg] Thanks for the review.

Agree with your comments. I had a similar thought (not exactly the same, but 
to avoid duplicating code) while making the changes, fluctuated back and forth 
for some time :), and settled on this patch assuming the mandatory resource 
count is always going to be 2. Will clean it up.


was (Author: maniraj...@gmail.com):
[~sunilg] Thanks for the review.

Agree with your comments. I had a similar thought (to avoid duplicating code) 
while making the changes, fluctuated back and forth for some time :), and 
settled on this patch assuming the mandatory resource count is always going 
to be 2. Will clean it up.

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.






[jira] [Commented] (YARN-6874) TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119604#comment-16119604
 ] 

Varun Saxena commented on YARN-6874:


[~vrushalic], got a chance to look at the failure.
This is happening because, post YARN-6850, we are not supplementing the 
timestamp for FlowRunColumn, i.e. the min_start_time and max_end_time columns, 
which can lead to a clash if two writes for app-created events happen at the 
same time, which is exactly what this test case does.

To fix this, we need to pass a true flag into the ColumnHelper constructor. I 
did encounter this failure once earlier too; not sure why, but now the failure 
is far more repetitive due to the above issue. If it reproduces again, we can 
analyse and fix it then.
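To make the clash concrete, here is a generic sketch of timestamp 
supplementing; the real ATSv2 TimestampGenerator differs in how it derives the 
low-order suffix:

{code}
import java.util.concurrent.atomic.AtomicLong;

public class SupplementedTimestamp {
  // Widen the millisecond timestamp and append a unique low-order suffix so
  // that two writes in the same millisecond map to distinct HBase cell
  // timestamps instead of silently overwriting each other.
  private static final long MULTIPLIER = 1_000_000L;
  private final AtomicLong seq = new AtomicLong();

  public long supplement(long millis) {
    return millis * MULTIPLIER + (seq.incrementAndGet() % MULTIPLIER);
  }
}
{code}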

> TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently
> ---
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
>
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}






[jira] [Updated] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-08-09 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6133:
---
Attachment: YARN-6133-YARN-5355.04.patch

Thanks [~jianhe] for the review. Attaching a patch addressing your comments.
There can be a minor race in the test because, for low values of the token 
renew interval (i.e. less than 10 seconds), we renew the token after it has 
expired (which won't be the case in a real cluster).
This race has been fixed in YARN-6134, though.

> [ATSv2 Security] Renew delegation token for app automatically if an app 
> collector is active
> ---
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6133-YARN-5355.01.patch, 
> YARN-6133-YARN-5355.02.patch, YARN-6133-YARN-5355.03.patch, 
> YARN-6133-YARN-5355.04.patch
>
>







[jira] [Comment Edited] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115846#comment-16115846
 ] 

weiyuan edited comment on YARN-6881 at 8/9/17 9:47 AM:
---

Hi [~templedf], I would like to try this issue; could you help assign it to 
me? Thanks.



was (Author: v123582):
Hi [~templedf], I can try this issue, but as a newbie I don't have permission 
to upload a patch. Can you help assign it to me? Thanks.


> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>  Labels: newbie
>
> The variable can be removed.






[jira] [Issue Comment Deleted] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weiyuan updated YARN-6881:
--
Comment: was deleted

(was: Hi [~templedf], I would like to try this issue; could you help assign 
it to me? Thanks.)

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>  Labels: newbie
>
> The variable can be removed.






[jira] [Commented] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119644#comment-16119644
 ] 

weiyuan commented on YARN-6881:
---

Hi Daniel, I would like to try this issue; could you help assign it to me? 
Thanks. (I mistakenly deleted my former message.)

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>  Labels: newbie
>
> The variable can be removed.






[jira] [Assigned] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-6881:
-

Assignee: weiyuan

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
>
> The variable can be removed.






[jira] [Commented] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119652#comment-16119652
 ] 

Sunil G commented on YARN-6881:
---

Added [~v123582] as a contributor and assigned this issue.

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
>
> The variable can be removed.






[jira] [Updated] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weiyuan updated YARN-6881:
--
Attachment: YARN-6881.001.patch

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-6881.001.patch
>
>
> The variable can be removed.






[jira] [Updated] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weiyuan updated YARN-6881:
--
Attachment: (was: YARN-6881.001.patch)

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-6881.001.patch
>
>
> The variable can be removed.






[jira] [Updated] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weiyuan updated YARN-6881:
--
Attachment: YARN-6881.001.patch

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-6881.001.patch
>
>
> The variable can be removed.






[jira] [Commented] (YARN-6975) Moving logging APIs over to slf4j in hadoop-yarn-server-tests, hadoop-yarn-server-web-proxy and hadoop-yarn-server-router

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119830#comment-16119830
 ] 

Hadoop QA commented on YARN-6975:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 8 new + 
77 unchanged - 4 fixed = 85 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880972/YARN-6975.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 39025ab9be22 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/p

[jira] [Commented] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119878#comment-16119878
 ] 

Hadoop QA commented on YARN-6361:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 46 unchanged - 0 fixed = 49 total (was 46) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 352 unchanged - 0 fixed = 353 total (was 352) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestMaxRunningAppsEnforcer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6361 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880973/YARN-6361.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9a9124c4160b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8a4bff0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16798/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16798/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-proje

[jira] [Commented] (YARN-6515) Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager

2017-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119889#comment-16119889
 ] 

Hudson commented on YARN-6515:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12152 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12152/])
YARN-6515. Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager. 
(aajisaka: rev 1a18d5e514d13aa3a88e9b6089394a27296d6bc3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java


> Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager
> 
>
> Key: YARN-6515
> URL: https://issues.apache.org/jira/browse/YARN-6515
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6515.001.patch, YARN-6515.002.patch
>
>
> 5 findbugs issues were reported in the NM project as part of the YARN-4166 [build| 
> https://builds.apache.org/job/PreCommit-YARN-Build/15694/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html]
> Issue 1: 
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
>  is a mutable collection which should be package protected
> Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
> At ContainerMetrics.java:\[line 134\]
> Issue 2:
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
> At ContainerLocalizer.java:\[line 334\]
> Issue 3: 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
> At NodeStatusUpdaterImpl.java:\[line 721\]
> Issue 4: 
> Hard coded reference to an absolute pathname in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> File name /sys/fs/cgroup
> At DockerLinuxContainerRuntime.java:\[line 455\]
> Issue 5:
> Useless object stored in variable removedNullContainers of method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Bug type UC_USELESS_OBJECT (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Value removedNullContainers
> Type java.util.HashSet
> At NodeStatusUpdaterImpl.java:\[line 644\]
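For Issues 2 and 3, the standard fix is to iterate the entry set rather than 
the key set; a minimal sketch:

{code}
import java.util.HashMap;
import java.util.Map;

public class EntrySetIteration {
  public static void main(String[] args) {
    Map<String, Long> pending = new HashMap<>();
    pending.put("resource-a", 1L);

    // Flagged pattern (WMI_WRONG_MAP_ITERATOR): get() re-looks-up each key.
    for (String key : pending.keySet()) {
      System.out.println(key + " -> " + pending.get(key));
    }

    // Fix: key and value come from a single lookup per entry.
    for (Map.Entry<String, Long> e : pending.entrySet()) {
      System.out.println(e.getKey() + " -> " + e.getValue());
    }
  }
}
{code}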






[jira] [Updated] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-09 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6736:

Attachment: YARN-6736-YARN-5355.001.patch

> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6736-YARN-5355.001.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 






[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119911#comment-16119911
 ] 

Varun Saxena commented on YARN-6323:


bq.  However YARN-6736 is planning to write into both v1 and v2 timelines 
during upgrade. I think we should make use of it during rolling upgrade so that 
RM will publish data into v1 and v2. 
Makes sense to me if we are doing a rolling upgrade from v1 to v2. This way the 
transition would be seamless, as the user can switch over to only v2 once he is 
sure that all the applications running at the time of the previous switchover 
(when both v1 and v2 were configured) have completed, and he can trust the data 
in v2 completely from a particular application onwards.
This would also be useful for those who want to try out v2 before they make a 
final decision to switch over to it.

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119913#comment-16119913
 ] 

Hadoop QA commented on YARN-6133:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 36s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6133 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880986/YARN-6133-YARN-5355.04.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2fc8bed1e584 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 

[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-08-09 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119924#comment-16119924
 ] 

Varun Vasudev commented on YARN-5534:
-

It's going to end up being a combination. Some settings have to be done in 
container-executor.cfg (like whitelisted volume mounts), and some will go into 
yarn-site.xml.

For example (just made up), an admin may want to mount /data-volume into every 
container run by some subset of users. container-executor.cfg should have a 
setting permitting the mounting of /data-volume, but yarn-site.xml should have 
a feature to mount it into every container for those users. Does that make sense?
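To make that split concrete, a purely illustrative sketch; both property names 
below are made up for this example and are not real settings:

{code}
# container-executor.cfg (root-owned): the hard security boundary.
# Hypothetical key permitting /data-volume to be mounted at all.
docker.allowed.volume-mounts=/data-volume
{code}

{code}
<!-- yarn-site.xml: the convenience knob layered on top. -->
<!-- Hypothetical property mounting /data-volume for a subset of users. -->
<property>
  <name>yarn.nodemanager.docker.default-mounts</name>
  <value>/data-volume:/data-volume</value>
</property>
{code}

The idea being that container-executor.cfg bounds what is ever allowed, while 
yarn-site.xml only selects within that bound.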

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch, 
> YARN-5534.003.patch
>
>
> 1. Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a Docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security risk.
> 3. Possible solutions
> One approach to providing safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as whitelisted mounting directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> their sub-directories can be mounted. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119931#comment-16119931
 ] 

Varun Saxena commented on YARN-6323:


Just to put it out there, should we differentiate between these two scenarios? 
I mean between momentarily writing entities to both v1 and v2 for a rolling 
upgrade, and writing to both v1 and v2 for comparison or evaluation.
For the rolling upgrade scenario, we could write entities for already-running 
apps only to v1 and for new apps only to v2, so that we do not get incomplete 
app data for some apps in either v1 or v2.

However, most users may want to try out v2 for a while before they fully switch 
to it. And if we adopt the approach above, we may lose data from v1 if the user 
decides not to take up v2.

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119939#comment-16119939
 ] 

Varun Saxena commented on YARN-6736:


The final switch to v2 only would involve some kind of admin operation; we 
would have to switch off the timeline server v1, for instance.
If the AHS/ATSv1 server is left running, then even though we have decided to 
switch to v2, a client running a job may still be misconfigured with v1 all 
across: it would get a timeline delegation token from the server, and the job 
would publish entities to timeline v1 while the RM/NM publish to v2. 
So the admin would need to shut down the ATSv1 server so that such a 
misconfiguration is caught at job submission time itself.



> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6736-YARN-5355.001.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119947#comment-16119947
 ] 

Hadoop QA commented on YARN-6881:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 8 unchanged - 0 fixed = 10 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881007/YARN-6881.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c8350d492a3b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a18d5e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16800/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16800/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16800/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-s

[jira] [Commented] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119946#comment-16119946
 ] 

Varun Saxena commented on YARN-6133:


Test failures are outstanding issues on trunk.

> [ATSv2 Security] Renew delegation token for app automatically if an app 
> collector is active
> ---
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6133-YARN-5355.01.patch, 
> YARN-6133-YARN-5355.02.patch, YARN-6133-YARN-5355.03.patch, 
> YARN-6133-YARN-5355.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119950#comment-16119950
 ] 

Varun Saxena commented on YARN-6736:


Is this a merge blocker?

> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6736-YARN-5355.001.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-08-09 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119965#comment-16119965
 ] 

Eric Badger commented on YARN-5534:
---

bq. For example (just made up), an admin may want to mount /data-volume into 
every container run by some subset of users. container-executor.cfg should have 
a setting permitting the mounting of /data-volume, but yarn-site.xml should 
have a feature to mount it into every container for those users. Does that make sense?
I still don't see why the overall mounting setting would be in 
container-executor.cfg while the user-specific setting would be in 
yarn-site.xml. If we're looking at this from a security perspective, the volume 
mount is either a potential attack vector or not. If it's not, then we don't 
really care whether anyone can mount it and then I would say we should just put 
everything in yarn-site.xml. If we assume that it is a potential attack vector, 
then we very much care that only certain users can mount that volume. In that 
case, I don't see why we would put that whitelist of users in yarn-site.xml, if 
we're also assuming that yarn-site.xml is potentially untrusted (I assume the 
reason we're putting things into container-executor.cfg is because it is only 
root read/writable). 

So basically my main points are:
1. If yarn-site.xml is untrusted, then we can't put any configs with potential 
security-related consequences in there (e.g. which volumes are whitelisted)
2. If yarn-site.xml is trusted, then I don't know why we need to move any of 
the configs into container-executor.cfg

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch, 
> YARN-5534.003.patch
>
>
> 1. Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a Docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security risk.
> 3. Possible solutions
> One approach to providing safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as whitelisted mounting directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> their sub-directories can be mounted. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6969) Remove method getMinShareMemoryFraction and getPendingContainers in class FairSchedulerQueueInfo

2017-08-09 Thread Larry Lo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119993#comment-16119993
 ] 

Larry Lo commented on YARN-6969:


Hi Yufei Gu, I'm a newbie. May I take this task as my start at contributing to 
Hadoop?

> Remove method getMinShareMemoryFraction and getPendingContainers in class 
> FairSchedulerQueueInfo
> 
>
> Key: YARN-6969
> URL: https://issues.apache.org/jira/browse/YARN-6969
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Priority: Trivial
>  Labels: newbie++
>
> They are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-882) Specify per user quota for private/application cache and user log files

2017-08-09 Thread Rostislaw Krassow (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120014#comment-16120014
 ] 

Rostislaw Krassow commented on YARN-882:


I got the same issue in production. During execution of a heavy Hive join (with 
the MapReduce execution engine), the corresponding 
$yarn.nodemanager.local-dirs/usercache//appcache/ directory grew. This 
led to the RM eliminating the nodes. 

The quotas for private/application cache should reflect resource quotas for the 
defined YARN queues.

> Specify per user quota for private/application cache and user log files
> ---
>
> Key: YARN-882
> URL: https://issues.apache.org/jira/browse/YARN-882
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
>
> At present there is no limit on the number of files / size of the files 
> localized by single user. Similarly there is no limit on the size of the log 
> files created by user via running containers.
> We need to restrict the user for this.
> For LocalizedResources; this has serious concerns in case of secured 
> environment where malicious user can start one container and localize 
> resources whose total size >= DEFAULT_NM_LOCALIZER_CACHE_TARGET_SIZE_MB. 
> Thereafter it will either fail (if no extra space is present on disk) or 
> deletion service will keep removing localized files for other 
> containers/applications. 
> The limit for logs/localized resources should be decided by RM and sent to NM 
> via secured containerToken. All these configurations should per container 
> instead of per user or per nm.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-09 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: YARN-6885.002.patch

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: 0001-YARN-6885.patch, YARN-6885.002.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}
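A minimal sketch of the switch-based form the summary asks for (the assumed 
shape only, not the attached patch; strings in switch require Java 7+):

{code}
// Only two tags shown; the real loop handles the remaining tags the same way.
switch (field.getTagName()) {
case "minResources": {
  String text = ((Text) field.getFirstChild()).getData().trim();
  Resource val = FairSchedulerConfiguration.parseResourceConfigValue(text);
  minQueueResources.put(queueName, val);
  break;
}
case "maxResources": {
  String text = ((Text) field.getFirstChild()).getData().trim();
  Resource val = FairSchedulerConfiguration.parseResourceConfigValue(text);
  maxQueueResources.put(queueName, val);
  break;
}
default:
  // unknown tags fall through to the existing handling
  break;
}
{code}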



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120042#comment-16120042
 ] 

Akira Ajisaka commented on YARN-6958:
-

LGTM, +1

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch
>
>
> This jira moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}
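For context, the mechanical shape of such a migration (an illustrative sketch; 
the class name is an example, not the actual diff):

{code}
// before: commons-logging
// import org.apache.commons.logging.Log;
// import org.apache.commons.logging.LogFactory;
// private static final Log LOG = LogFactory.getLog(LoggingSketch.class);

// after: slf4j
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingSketch.class);

  void onAppStart(String appId) {
    // slf4j {} placeholders defer string construction until the
    // log level is actually enabled, unlike manual concatenation.
    LOG.info("Starting collector for {}", appId);
  }
}
{code}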



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-6958:

Target Version/s: 2.9.0, 3.0.0-beta1
   Fix Version/s: 3.0.0-beta1

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch
>
>
> This jira moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120046#comment-16120046
 ] 

Akira Ajisaka commented on YARN-6958:
-

Committed this to trunk. Hi [~Cyl], would you provide a patch for branch-2?

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch
>
>
> This jira moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6892:
--
Attachment: YARN-6892-YARN-3926.002.patch

Updating patch after addressing the comments. Thanks [~leftnoteasy]

> Improve API implementation in Resources and DominantResourceCalculator in 
> align to ResourceInformation
> --
>
> Key: YARN-6892
> URL: https://issues.apache.org/jira/browse/YARN-6892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6892-YARN-3926.001.patch, 
> YARN-6892-YARN-3926.002.patch
>
>
> In YARN-3926, the APIs in Resources and DRC spend significant CPU cycles in 
> most of their methods. For better performance, it is better to improve these 
> APIs, since the resource-types order is defined at the system level (the 
> ResourceUtils class ensures this post YARN-6788).
> This work is a follow-up to YARN-6788.
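As a rough illustration of the kind of win a fixed, system-wide resource-type 
order enables (a sketch assuming index-based accessors on Resource, not the 
actual patch):

{code}
// With every Resource laying out its ResourceInformation entries in the
// same globally fixed order, componentwise operations can loop by index
// instead of looking resources up by name in a map.
public static Resource addTo(Resource lhs, Resource rhs) {
  int n = ResourceUtils.getNumberOfKnownResourceTypes();
  for (int i = 0; i < n; i++) {
    ResourceInformation l = lhs.getResourceInformation(i);
    ResourceInformation r = rhs.getResourceInformation(i);
    l.setValue(l.getValue() + r.getValue());
  }
  return lhs;
}
{code}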



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120069#comment-16120069
 ] 

Hudson commented on YARN-6958:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12154 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12154/])
YARN-6958. Moving logging APIs over to slf4j in (aajisaka: rev 
63cfcb90ac6fbb79ba9ed6b3044cd999fc74e58c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/apptoflow/AppToFlowTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/ColumnHelper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/AppLevelTimelineCollecto

[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120088#comment-16120088
 ] 

Hadoop QA commented on YARN-6736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-5355 has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 249 unchanged - 0 fixed = 253 total (was 249) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6736 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881018/YARN-6736-YARN-5355.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ec55c66db527 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 3088cfc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |

[jira] [Commented] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120150#comment-16120150
 ] 

Hadoop QA commented on YARN-6885:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 186 new + 16 unchanged - 2 fixed = 202 total (was 18) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Impossible cast from Double to Float in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:Float in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:[line 544] |
|  |  Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:[line 560] |
| Failed junit tests | 
hadoop.yarn.serve

[jira] [Created] (YARN-6976) Some containers take a long time in KILLING state after the application is finished.

2017-08-09 Thread Aidi Pi (JIRA)
Aidi Pi created YARN-6976:
-

 Summary: Some containers take a long time in KILLING state after 
the application is finished.
 Key: YARN-6976
 URL: https://issues.apache.org/jira/browse/YARN-6976
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager
Affects Versions: 2.7.3
 Environment: OS: Ubuntu 16.04, Java: JDK1.8, Docker: 
seqenceid/hadoop-2.4.0
Reporter: Aidi Pi


I use Docker as the container runtime for YARN and ran Spark applications. In 
some runs, the resource manager log indicates that the application is done. 
However, some nodemanager logs indicate that the containers on this node are 
still in the RUNNING state and then enter the KILLING state. They spend a long 
time (about 20s) in the KILLING state before being terminated.

In this case, 3 containers were still running after the app entered the 
FINISHED state.
Below are the tails of the RM and NM logs:

{panel:title=RM log}
2017-08-08 15:11:34,009 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: 
application_1502226348464_0002 State change from FINISHING to FINISHED
{panel}


{panel:title=NM log}
2017-08-08 15:11:51,277 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_03 transitioned from KILLING to 
EXITED_WITH_SUCCESS
2017-08-08 15:11:51,277 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_10 transitioned from KILLING to 
EXITED_WITH_SUCCESS
2017-08-08 15:11:51,277 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_16 transitioned from KILLING to 
EXITED_WITH_FAILURE
2017-08-08 15:11:51,309 INFO 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=eddie
OPERATION=Container Finished - SucceededTARGET=ContainerImpl
RESULT=SUCCESS  APPID=application_1502226348464_0002
CONTAINERID=container_1502226348464_0002_01_03
2017-08-08 15:11:51,351 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_03 transitioned from 
EXITED_WITH_SUCCESS to DONE
2017-08-08 15:11:51,351 INFO 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=eddie
OPERATION=Container Finished - SucceededTARGET=ContainerImpl
RESULT=SUCCESS  APPID=application_1502226348464_0002
CONTAINERID=container_1502226348464_0002_01_10
2017-08-08 15:11:51,351 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_10 transitioned from 
EXITED_WITH_SUCCESS to DONE
2017-08-08 15:11:51,357 WARN 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=eddie
OPERATION=Container Finished - Failed   TARGET=ContainerImplRESULT=FAILURE  
DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE
APPID=application_1502226348464_0002
CONTAINERID=container_1502226348464_0002_01_16
2017-08-08 15:11:51,357 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_16 transitioned from 
EXITED_WITH_FAILURE to DONE
{panel}








--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120173#comment-16120173
 ] 

Hadoop QA commented on YARN-6892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
0s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 15 unchanged - 1 fixed = 16 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6892 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881034/YARN-6892-YARN-3926.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f9abe88ea8c4 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 1b586d7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16804/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16804/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
|  Test Results | 
https://builds.apac

[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-09 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.002.patch

Now that YARN-6788 is in, here's a fresh patch that is significantly better 
optimized.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-08-09 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120200#comment-16120200
 ] 

Manikandan R commented on YARN-6953:


[~sunilg], attached a WIP patch to make sure the changes are in line with our 
discussion. Please review.
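
For context, a minimal sketch of the simplification being discussed: handle 
the two mandatory resources explicitly instead of looping. This is an 
illustration only, not the attached patch; the {{minimumAllocation}} variable 
is assumed.
{code}
// Read the mandatory minimums directly from the config...
long minMem = conf.getLong(
    YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
    YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB);
int minVcores = conf.getInt(
    YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES,
    YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES);
// ...and set memory and vcores explicitly; no generic loop over resource names.
minimumAllocation.setMemorySize(minMem);
minimumAllocation.setVirtualCores(minVcores);
{code}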

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-08-09 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6953:
---
Attachment: YARN-6953-YARN-3926-WIP.patch

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch, YARN-6953-YARN-3926-WIP.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R reassigned YARN-6935:
--

Assignee: Manikandan R

> ResourceProfilesManagerImpl.parseResource() has no need of the key parameter
> 
>
> Key: YARN-6935
> URL: https://issues.apache.org/jira/browse/YARN-6935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie
> Attachments: YARN-6935-YARN-3926.001.patch
>
>
> The {{key}} parameter is the name of the resource profile being parsed, which 
> is irrelevant to parsing the {{value}} as a {{Resource}} and hence is unused. 
>  It should be removed, and {{value}} should be renamed to something more 
> descriptive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120209#comment-16120209
 ] 

Manikandan R commented on YARN-6935:


Attached patch for review.
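
For reviewers, a hedged sketch of the intended signature change (parameter 
types here are assumed for illustration, not copied from the patch):
{code}
// Before: the profile name ("key") is accepted but never used while parsing.
Resource parseResource(String key, Object value) { ... }

// After: drop the unused parameter and give "value" a descriptive name.
Resource parseResource(Object profileInfo) { ... }
{code}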

> ResourceProfilesManagerImpl.parseResource() has no need of the key parameter
> 
>
> Key: YARN-6935
> URL: https://issues.apache.org/jira/browse/YARN-6935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie
> Attachments: YARN-6935-YARN-3926.001.patch
>
>
> The {{key}} parameter is the name of the resource profile being parsed, which 
> is irrelevant to parsing the {{value}} as a {{Resource}} and hence is unused. 
>  It should be removed, and {{value}} should be renamed to something more 
> descriptive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6935:
---
Attachment: YARN-6935-YARN-3926.001.patch

> ResourceProfilesManagerImpl.parseResource() has no need of the key parameter
> 
>
> Key: YARN-6935
> URL: https://issues.apache.org/jira/browse/YARN-6935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie
> Attachments: YARN-6935-YARN-3926.001.patch
>
>
> The {{key}} parameter is the name of the resource profile being parsed, which 
> is irrelevant to parsing the {{value}} as a {{Resource}} and hence is unused. 
>  It should be removed, and {{value}} should be renamed to something more 
> descriptive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120221#comment-16120221
 ] 

Hadoop QA commented on YARN-6903:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 60 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
19s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
43s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
31s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 50s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 133 unchanged - 
5 fixed = 136 total (was 138) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 36s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 517 new + 1521 unchanged - 403 fixed = 2038 total (was 1924) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-slider in t

[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120223#comment-16120223
 ] 

Rohith Sharma K S commented on YARN-6323:
-

bq. we can possibly write entities for running apps only to v1 and from new 
apps to v2 so we do not get incomplete app data for some apps from both v1 and 
v2.
This is very hard to enforce from the RM, since the RM can't differentiate 
between recovered apps and newly submitted apps. The RM could write into the 
timeline server in non-exclusive mode for some time period. 

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-08-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120265#comment-16120265
 ] 

Sunil G commented on YARN-5146:
---

Looks fine. I will commit later today if there are no objections.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch, YARN-5146.004.patch
>
>
> Current implementation in branch YARN-3368 only support capacity scheduler,  
> we want to make it support fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6969) Remove method getMinShareMemoryFraction and getPendingContainers in class FairSchedulerQueueInfo

2017-08-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120266#comment-16120266
 ] 

Yufei Gu commented on YARN-6969:


Sure, feel free to take it. It seems you aren't a contributor yet. [~rkanter], 
can you add [~LarryLo] as a contributor? Thanks. 
[~LarryLo], once [~rkanter] has added you as a contributor, you can assign it 
to yourself.

> Remove method getMinShareMemoryFraction and getPendingContainers in class 
> FairSchedulerQueueInfo
> 
>
> Key: YARN-6969
> URL: https://issues.apache.org/jira/browse/YARN-6969
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Priority: Trivial
>  Labels: newbie++
>
> They are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6971) Clean up different ways to create resources

2017-08-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-6971:
--

Assignee: Yufei Gu

> Clean up different ways to create resources
> ---
>
> Key: YARN-6971
> URL: https://issues.apache.org/jira/browse/YARN-6971
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> There are several ways to create a {{resource}} object, e.g., 
> BuilderUtils.newResource() and Resources.createResource(). These methods not 
> only cause confusion but also performance issues; for example, 
> BuilderUtils.newResource() is significantly slower than 
> Resources.createResource(). 
> We could merge them somehow, and replace most BuilderUtils.newResource() 
> calls with Resources.createResource().
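
To make the proposed direction concrete, a hedged sketch of a typical 
call-site change (the values are illustrative):
{code}
// Instead of the server-side builder:
//   Resource r = BuilderUtils.newResource(1024, 2);
// prefer the common factory in org.apache.hadoop.yarn.util.resource.Resources:
Resource r = Resources.createResource(1024 /* memory in MB */, 2 /* vcores */);
{code}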



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-09 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.003.patch

Realized that I had the sorts backwards.  New patch attached.
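
To illustrate the intended ordering, here is a minimal sketch (not the patch 
itself), assuming each side's per-resource shares are already computed: sort 
both arrays and compare pairwise from the most dominant share down.
{code}
static int compareShares(double[] lhsShares, double[] rhsShares) {
  double[] l = lhsShares.clone();
  double[] r = rhsShares.clone();
  java.util.Arrays.sort(l);  // ascending
  java.util.Arrays.sort(r);
  // Walk from the end so the most dominant shares are compared first.
  for (int i = l.length - 1; i >= 0; i--) {
    int cmp = Double.compare(l[i], r[i]);
    if (cmp != 0) {
      return cmp;
    }
  }
  return 0;
}
{code}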

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120278#comment-16120278
 ] 

Jason Lowe commented on YARN-6820:
--

bq. DEFAULT_TIMELINE_SERVICE_READ_ALLOWED_USERS should be star *. It is empty 
now.

I do not agree.  The whole point of this JIRA is to block all users from seeing 
the data in the ATS.  The feature already has a master enable that defaults to 
off, so by default all users can read the data.  If a user bothers to flip the 
master enable to on, it should not have zero effect by default.  IMHO once the 
master enable is turned on, it should only allow the configured YARN admins to 
read the data by default, and the config needs to be explicitly updated to 
allow any other users to read.  Therefore I believe an empty value for this 
default is correct.

Speaking of which, the following code can NPE:
{code}
  String adminAclListStr =
  conf.getInitParameter(YarnConfiguration.YARN_ADMIN_ACL);
  if (StringUtils.isEmpty(adminAclListStr)) {
adminAclList = new AccessControlList(
YarnConfiguration.DEFAULT_TIMELINE_SERVICE_READ_ALLOWED_USERS);
LOG.info("adminAclList not set, hence setting it to "
+ " YarnConfiguration.DEFAULT_TIMELINE_SERVICE_READ_ALLOWED_USERS");
  }
  adminAclList = new AccessControlList(adminAclListStr);
{code}
because adminAclListStr is always passed to AccessControlList and could be 
null. It also doesn't make sense to log a message that references code symbols 
for property values, since users won't be familiar with those. We also 
shouldn't assume that the whitelisted-reader default makes a good admin 
default; even if it weren't empty, a default reader list is not the same thing 
as a default admin list. Therefore I think it should be simplified to 
something like:
{code}
  String adminAclListStr =
  conf.getInitParameter(YarnConfiguration.YARN_ADMIN_ACL);
  if (StringUtils.isEmpty(adminAclListStr)) {
adminAclListStr = "";
  }
  adminAclList = new AccessControlList(adminAclListStr);
{code}

The same comment applies to the code where we initialize the filter config. 
We should explicitly set it to "" (or a static final String property specific 
to this filter with that value) rather than assume the default read-allowed 
list makes a good default admin value.

bq. HttpServletRequest#remoteUser will always be null when we access from 
browsers. I doubt that in a normal browser we always get 
AuthorizationException; however, it is expected if the user is not 
authenticated. But my doubt is: should we get the user from the principal name?

RMWebServices gets the user name from the principal, and I think we would need 
to do the same here.
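
For example, a minimal sketch using the standard servlet API (variable names 
assumed, not taken from the actual patch):
{code}
// Prefer the authenticated principal over getRemoteUser(), which can be null.
java.security.Principal principal = httpServletRequest.getUserPrincipal();
String callerName = (principal == null) ? null : principal.getName();
UserGroupInformation callerUGI = (callerName == null)
    ? null : UserGroupInformation.createRemoteUser(callerName);
{code}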

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120342#comment-16120342
 ] 

Hadoop QA commented on YARN-6610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4569 unchanged - 5 fixed = 4569 total (was 4574) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881048/YARN-6610.YARN-3926.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e4258d200d34 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 1b586d7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16807/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16807/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Compon

[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120346#comment-16120346
 ] 

Varun Saxena commented on YARN-6323:


bq. This is very hard to enforce from the RM, since the RM can't differentiate 
between recovered apps and newly submitted apps. 
Yeah, we will have to write code to ensure this happens, i.e. store a flag in 
the state store (the non-existence of which indicates data being written to 
v1). I just wanted to point out another possibility if we wanted to ensure 
that incomplete app data does not exist. 
However, as I said, this approach has the drawback that we may lose data from 
v1 if the user decides not to take up v2, and it's an unlikely user scenario 
anyway, so I do not suggest following this approach.
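
Just to make that flag idea concrete, a rough sketch (all names hypothetical; 
pseudo-code, not a proposal for the actual state-store schema):
{code}
// On publish: route by whether the app was marked as started under v2.
boolean startedUnderV2 = stateStore.exists("/timeline-v2-apps/" + appId);
if (startedUnderV2) {
  publishToV2(entity);  // app submitted after the upgrade
} else {
  publishToV1(entity);  // recovered app; keep writing to v1
}
{code}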

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120347#comment-16120347
 ] 

Hadoop QA commented on YARN-6935:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 47s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6935 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881044/YARN-6935-YARN-3926.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3503048237d6 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 1b586d7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16805/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16805/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16805/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-s

[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-08-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120363#comment-16120363
 ] 

Wangda Tan commented on YARN-6033:
--

Committed to trunk. Thanks [~vvasudev], and thanks for the reviews from 
[~miklos.szeg...@cloudera.com]/[~sunilg]! 

Backport to branch-2 blocked by YARN-6726.

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033.005.patch, YARN-6033.006.patch, YARN-6033.007.patch, 
> YARN-6033.008.patch, YARN-6033.009.patch, YARN-6033.010.patch, 
> YARN-6033.011.patch, YARN-6033.012.patch, YARN-6033.013.patch, 
> YARN-6033.014.patch, YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6033) Add support for sections in container-executor configuration file

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6033:
-
Fix Version/s: 3.0.0-beta1

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033.005.patch, YARN-6033.006.patch, YARN-6033.007.patch, 
> YARN-6033.008.patch, YARN-6033.009.patch, YARN-6033.010.patch, 
> YARN-6033.011.patch, YARN-6033.012.patch, YARN-6033.013.patch, 
> YARN-6033.014.patch, YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120399#comment-16120399
 ] 

Vrushali C commented on YARN-6820:
--

Thanks [~jlowe] and [~rohithsharma] for the reviews. Will upload an updated 
patch today.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120407#comment-16120407
 ] 

Hudson commented on YARN-6033:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12155/])
YARN-6033. Add support for sections in container-executor configuration 
(wangda: rev ec694145cf9c0ade7606813871ca2a4a371def8e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/old-config.cfg
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_main.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-1.cfg
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-2.cfg


> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033.005.patch, YARN-6033.006.patch, YARN-6033.007.patch, 
> YARN-6033.008.patch, YARN-6033.009.patch, YARN-6033.010.patch, 
> YARN-6033.011.patch, YARN-6033.012.patch, YARN-6033.013.patch, 
> YARN-6033.014.patch, YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6917) Queue path is recomputed from scratch on every allocation

2017-08-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120417#comment-16120417
 ] 

Jason Lowe commented on YARN-6917:
--

Thanks for the patch, Eric!  I agree with the checkstyle nit: the new queuePath 
field should be private.  Other than that I think it looks good.  Agree that 
this is an optimization and the testing should be covered by existing 
reconfiguration tests.
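
For readers following along, a minimal sketch of the optimization (the class 
shape here is hypothetical, not the actual CapacityScheduler code):
{code}
class QueueSketch {
  private final QueueSketch parent;
  private final String name;
  private String queuePath;  // cached; recomputed only on reinitialization

  QueueSketch(QueueSketch parent, String name) {
    this.parent = parent;
    this.name = name;
    this.queuePath =
        (parent == null) ? name : parent.getQueuePath() + "." + name;
  }

  String getQueuePath() {
    return queuePath;  // O(1) on the allocation path; no parent-chain walk
  }
}
{code}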

> Queue path is recomputed from scratch on every allocation
> -
>
> Key: YARN-6917
> URL: https://issues.apache.org/jira/browse/YARN-6917
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Minor
> Attachments: YARN-6917.001.patch
>
>
> As part of the discussion in YARN-6901 I noticed that we are recomputing a 
> queue's path for every allocation.  Currently getting the queue's path 
> involves calling getQueuePath on the parent then building onto that string 
> with the basename of the queue.  In turn the parent's getQueuePath method 
> does the same, so we end up spending time recomputing a string that will 
> never change until a reconfiguration.
> Ideally the queue path should be computed once during queue initialization 
> rather than on-demand.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120554#comment-16120554
 ] 

Suma Shivaprasad commented on YARN-6550:


[~aw] Thanks for checking on the bash version portability. With the updated 
patch that uses command groups, there is a UT failure that exposed some 
unexpected behaviour.

The UT, TestContainerLaunch.testInvalidSyntax, checks that failures from a 
bunch of invalid commands are propagated through ShellCommandExecutor. 

Behaviour without the patch

Below is the script that the UT executes 

{noformat}
#!/bin/bash

export 
APPLICATION_WORKFLOW_CONTEXT="{"workflowId":"609f91c5cd83","workflowName":"

insert table
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

Expected error

*/tmp/unit_test_pass.sh: line 5: insert: command not found*
/tmp/unit_test_pass.sh: line 6: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_pass.sh: line 6: `partition (cd_education_status)'


Behaviour with the patch

Script that the UT executes with the patch (command groups)
--
{noformat}
{
echo "Setting up env variables"
export 
APPLICATION_WORKFLOW_CONTEXT=""workflowId":"609f91c5cd83","workflowName":"

insert table 
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
echo "Setting up job resources"
echo "Launching container"
} 1> >(tee -a "${STDOUT}" >&1) 2> >(tee -a "${STDERR}" >&2)   # Note that the 
redirection doesn't matter; having a command group causes the failure.
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

Error

/tmp/unit_test_fail.sh: line 6: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_fail.sh: line 6: `partition (cd_education_status)'

Please note that the error for the invalid "insert table" command is not even 
reported.

Given the above issues, I was exploring other ways of achieving redirection 
without doing it per line. Using exec with redirection seems like a more 
concise way to achieve this: http://tldp.org/LDP/abs/html/x17974.html
I also tested the above UT script with exec and it works fine. If you don't 
have any objections, I will update the patch to use exec with redirection 
instead.

{noformat}
export STDOUT="/tmp/1.out"
export STDERR="/tmp/1.err"

exec 1> >(tee -a "${STDOUT}" >&1) 2> >(tee -a "${STDERR}" >&2)
export 
APPLICATION_WORKFLOW_CONTEXT="{"workflowId":"609f91c5cd83","workflowName":"

insert table
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

Error
==
*/tmp/unit_test_exec.sh: line 9: insert: command not found*
/tmp/unit_test_exec.sh: line 10: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_exec.sh: line 10: `partition (cd_education_status)'

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh which generated by NM will do a bunch of things (like 
> create link, etc.) while launch a process. No logs captured until {{exec}} is 
> called. We need capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6852:
-
Attachment: YARN-6852.005.patch

Thanks for the comments, [~sunil.gov...@gmail.com]. Attached the 005 patch, 
which addresses all comments except #4, since that one is a bit out of scope. 
I'd prefer to do it once we have more requirements around cgroups.

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch
>
>
> This JIRA plan to add support of:
> 1) Isolation in CGroups. (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120571#comment-16120571
 ] 

Allen Wittenauer commented on YARN-6550:


Are you sure your output is correct? I'm seeing missing curly braces between 
the two. That error would indicate that other quotes (or other bits) are 
missing too.

Also:

{code}
#!/bin/bash
{code}

Not portable.

{code}
if [ $hadoop_shell_errorcode -ne 0 ]
{code}

use [[ and quote the variable.
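
i.e., something along these lines:
{code}
if [[ "${hadoop_shell_errorcode}" -ne 0 ]]; then
  exit "${hadoop_shell_errorcode}"
fi
{code}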


> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh which generated by NM will do a bunch of things (like 
> create link, etc.) while launch a process. No logs captured until {{exec}} is 
> called. We need capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5736) YARN container executor config does not handle white space

2017-08-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120578#comment-16120578
 ] 

Wangda Tan commented on YARN-5736:
--

[~dan...@cloudera.com], [~miklos.szeg...@cloudera.com], 
[~shaneku...@gmail.com], 

I think this patch should be backported to branch-2 as well; is there any 
concern about doing this?

> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN_5736.000.patch, YARN-5736.001.patch, 
> YARN-5736.002.patch, YARN-5736.addendum.000.patch
>
>
> The container executor configuration reader does not handle white space or 
> malformed key-value pairs in the config file correctly or gracefully.
> As an example, the following key-value line, which is part of the 
> configuration (note the << is used as a marker to show the extra trailing 
> space):
> yarn.nodemanager.linux-container-executor.group=yarn <<
> is a valid line, but when you run the check over the file:
> [root@test]#./container-executor --checksetup
> Can't get group information for yarn - Success.
> [root@test]#
> it fails to find the yarn group, because it really tries to find the "yarn " 
> group, which fails. There is no trimming anywhere while processing the lines. 
> A failure would also occur if a space were added before or after the = sign.
> A minor nit is the fact that a failure is still logged as a Success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.004.patch

Uploading v004. 

Updates are:
- Using empty string "" for initializing Admin ACL list if YARN_ADMIN_ACL is 
not set
- Using the Principal in HttpServletRequest to create the UGI instead of the 
remote user in the HttpServletRequest
- updated unit tests to conform to the above changes

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: (was: YARN-6820-YARN-5355.004.patch)

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.004.patch

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-09 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6903:
--
Attachment: YARN-6903.yarn-native-services.05.patch

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch, 
> YARN-6903.yarn-native-services.02.patch, 
> YARN-6903.yarn-native-services.03.patch, 
> YARN-6903.yarn-native-services.04.patch, 
> YARN-6903.yarn-native-services.05.patch
>
>
> There are some new features in YARN core, like rich placement scheduling, 
> container auto restart, and container upgrade, that can be taken advantage of 
> by the native-service framework. Besides, there is quite a lot of legacy code 
> which is no longer required. 
> So we decided to rewrite the core part to have a leaner codebase and make use 
> of various advanced features in YARN. 
> The new code design will be in line with what we have designed for the 
> service API, YARN-4793.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6789) new api to get all supported resources from RM

2017-08-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120636#comment-16120636
 ] 

Wangda Tan commented on YARN-6789:
--

After an offline discussion with [~sunilg], I think we discovered more issues 
with the unit in the Resource object.

The "unit" creates several issues:
- The branch's current behavior: if the unit of a given resource information 
is not set, the default unit configured in resource-type.cfg is used to 
initialize containers ({{Resource.newInstance}}), while the unit is left 
untouched during PB record initialization ({{ResourcePBImpl(ResourceProto 
proto)}}).
- However, if an AM runs with old code (which doesn't have the YARN-3926 
logic), it will send the resource PB record to the RM without a unit on the 
wire, so the RM interprets the incoming memory value as having an empty unit 
(which means bytes). This is an incompatible behavior.
- Secondly, as I commented above, the "unit" inside ResourceTypeInfo is very 
confusing: a. it is not a minimum unit; b. it is not a default unit, since it 
won't affect the "default unit" inside the AM, only inside the RM, which the 
AM should not care about; c. it is not a "suggested/preferred unit" either, 
because that doesn't make sense.
- In addition, it creates a performance issue as well, since all Resource 
operations need to convert to the same unit.

My personal preference is to completely remove the unit from 
ResourceInformation, and have the unit of a ResourceType mean the unit of that 
resource type, for example resource.types.memory.unit = MB. It will mainly be 
used for UI display. Units of known resource types, including vcores and 
memory, will be hard-coded and cannot be changed via the configuration file; 
this is mainly for backward compatibility. We can provide a unit converter as 
a client library for the AM/client to use; Resource-related classes should not 
use it directly (see the sketch below).

Thoughts?
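
For example, a client-side converter could look roughly like this (a minimal 
sketch with made-up names, not a proposed API):

{code}
// Minimal sketch of a client-side unit converter; names are made up.
public final class UnitConverterSketch {
  // Powers of 1024 for "", K, M, G, T -- memory-style binary units.
  private static final String UNITS = " KMGT";

  /** e.g. convert(2048, "M", "G") == 2. Integer division; lossy if inexact. */
  public static long convert(long value, String from, String to) {
    int f = UNITS.indexOf(from.isEmpty() ? ' ' : from.charAt(0));
    int t = UNITS.indexOf(to.isEmpty() ? ' ' : to.charAt(0));
    if (f < 0 || t < 0) {
      throw new IllegalArgumentException("unknown unit: " + from + "/" + to);
    }
    long result = value;
    for (int i = f; i < t; i++) result /= 1024;   // towards a bigger unit
    for (int i = t; i < f; i++) result *= 1024;   // towards a smaller unit
    return result;
  }
}
{code}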

> new api to get all supported resources from RM
> --
>
> Key: YARN-6789
> URL: https://issues.apache.org/jira/browse/YARN-6789
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6789-YARN-3926.001.patch
>
>
> It would be better to provide an API to get all supported resource types 
> from the RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120645#comment-16120645
 ] 

Suma Shivaprasad commented on YARN-6550:


Yes, the output is correct. Do you mean the curly braces highlighted in bold 
below? That didn't matter; I was trying out different things and would have 
missed it. Also, I was executing this with "bash" explicitly, so the 
#!/bin/bash wouldn't have affected it.

bash /tmp/unit_test_fail.sh
/tmp/unit_test_fail.sh: line 11: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_fail.sh: line 11: `partition (cd_education_status)'

{noformat}
#!/bin/bash

export STDOUT="/tmp/1.out"
export STDERR="/tmp/1.err"

{
echo "Setting up env variables"
export 
APPLICATION_WORKFLOW_CONTEXT="*{*"workflowId":"609f91c5cd83","workflowName":"

insert table
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
echo "Setting up job resources"
echo "Launching container"
} 1> >(tee -a "${STDOUT}" >&1) 2> >(tee -a "${STDERR}" >&2)
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [[ "$hadoop_shell_errorcode" -ne 0 ]]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6874) TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6874:
-
Attachment: YARN-6874-YARN-5355.0001.patch

Thanks [~varun_saxena]. Yes, I think if two writes happen within the same 
millisecond for the min start time, the second one will overwrite the other, 
which is exactly why we supplement the timestamp for metric writes in the flow 
run table.

I am attaching a very simple patch that modifies the ColumnHelper constructor 
call to pass "true" for the flag that indicates the use of a supplemented 
timestamp while storing.

The effect will be that for the min start time and max end time columns of the 
flow, the supplemented timestamp is used correctly. It will also be used for 
the flow version column store, so the biggest timestamp value will be fetched 
when we query for the flow version; the effect is the same as different apps 
writing the flow version.

Uploading v001.
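
For context, a supplemented timestamp works roughly as below (constants and 
names here are illustrative, not the actual TimestampGenerator code): the 
millisecond timestamp is scaled up and a per-writer suffix fills the low-order 
digits, so two writes in the same millisecond land on distinct cell timestamps.

{code}
// Illustrative sketch only; the real TimestampGenerator differs in detail.
public final class SupplementedTsSketch {
  private static final long MULTIPLIER = 1_000_000L;

  /** Scale the ms timestamp and put a per-writer suffix in the low digits. */
  public static long supplement(long tsMillis, long writerSuffix) {
    return tsMillis * MULTIPLIER + (writerSuffix % MULTIPLIER);
  }

  /** Recover the original ms timestamp when reading back. */
  public static long original(long supplementedTs) {
    return supplementedTs / MULTIPLIER;
  }
}
{code}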


> TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently
> ---
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
> Attachments: YARN-6874-YARN-5355.0001.patch
>
>
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6413) Decouple Yarn Registry API from ZK

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120653#comment-16120653
 ] 

Jian He commented on YARN-6413:
---

Looks like ApplicationServiceRecordKey etc. are still using the appId; as said 
before, won't that conflict with what exists today?

> Decouple Yarn Registry API from ZK
> --
>
> Key: YARN-6413
> URL: https://issues.apache.org/jira/browse/YARN-6413
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: amrmproxy, api, resourcemanager
>Reporter: Ellen Hui
>Assignee: Ellen Hui
> Attachments: 0001-Registry-API-v2.patch, 0002-Registry-API-v2.patch, 
> 0003-Registry-API-api-only.patch
>
>
> Right now the Yarn Registry API (defined in the RegistryOperations interface) 
> is a very thin layer over Zookeeper. This jira proposes changing the 
> interface to abstract away the implementation details so that we can write a 
> FS-based implementation of the registry service, which will be used to 
> support AMRMProxy HA.
> The new interface will use register/delete/resolve APIs instead of 
> Zookeeper-specific operations like mknode. 
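
As a rough sketch of the decoupled API shape the description suggests (names 
are assumptions, not the actual patch):

{code}
// Hypothetical sketch of an implementation-neutral registry API.
import java.io.IOException;
import org.apache.hadoop.registry.client.types.ServiceRecord;

interface ServiceRecordKey { }   // neutral key type, instead of a ZK path

interface RegistryApiSketch {
  void register(ServiceRecordKey key, ServiceRecord record) throws IOException;
  ServiceRecord resolve(ServiceRecordKey key) throws IOException;
  void delete(ServiceRecordKey key) throws IOException;
}
{code}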



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6413) Decouple Yarn Registry API from ZK

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120653#comment-16120653
 ] 

Jian He edited comment on YARN-6413 at 8/9/17 9:06 PM:
---

Looks like ApplicationServiceRecordKey etc. are still using the appId, and 
ContainerServiceRecordKey is using the ContainerId; as said before, won't that 
conflict with what exists today?
It's not clear to me how the service record interface will be used by the 
current code. Will it be the same as in the previous patch?


was (Author: jianhe):
Looks like ApplicationServiceRecordKey etc. are still using the appId; as said 
before, won't that conflict with what exists today?

> Decouple Yarn Registry API from ZK
> --
>
> Key: YARN-6413
> URL: https://issues.apache.org/jira/browse/YARN-6413
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: amrmproxy, api, resourcemanager
>Reporter: Ellen Hui
>Assignee: Ellen Hui
> Attachments: 0001-Registry-API-v2.patch, 0002-Registry-API-v2.patch, 
> 0003-Registry-API-api-only.patch
>
>
> Right now the Yarn Registry API (defined in the RegistryOperations interface) 
> is a very thin layer over Zookeeper. This jira proposes changing the 
> interface to abstract away the implementation details so that we can write a 
> FS-based implementation of the registry service, which will be used to 
> support AMRMProxy HA.
> The new interface will use register/delete/resolve APIs instead of 
> Zookeeper-specific operations like mknode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-6977:


 Summary: Node information is not provided for non am containers in 
RM logs
 Key: YARN-6977
 URL: https://issues.apache.org/jira/browse/YARN-6977
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish


There is no information on which node a non-AM container is being assigned in 
trunk for 3.0.
Earlier we used to have logs for non-AM containers in a similar way:
{code}
Assigned container container_ of capacity  on host 
, which has 1 containers,  used and  available after allocation
{code}

3.0 has this information for the AM container alone, in the following way:
{code}
Done launching container Container: [ContainerId: container_, 
AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
nodeAddress, Resource: , Priority: 0, Token: Token { 
kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
appattempt_
{code}

Can we please have a similar message for non-AM containers too?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Sumana Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumana Sathish updated YARN-6977:
-
Description: 
There is no information on which node a non-AM container is being assigned in 
trunk for Hadoop 3.0.
Earlier we used to have logs for non-AM containers in a similar way:
{code}
Assigned container container_ of capacity  on host 
, which has 1 containers,  used and  available after allocation
{code}

3.0 has this information for the AM container alone, in the following way:
{code}
Done launching container Container: [ContainerId: container_, 
AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
nodeAddress, Resource: , Priority: 0, Token: Token { 
kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
appattempt_
{code}

Can we please have a similar message for non-AM containers too?

  was:
There is no information on which node a non-AM container is being assigned in 
trunk for 3.0.
Earlier we used to have logs for non-AM containers in a similar way:
{code}
Assigned container container_ of capacity  on host 
, which has 1 containers,  used and  available after allocation
{code}

3.0 has this information for the AM container alone, in the following way:
{code}
Done launching container Container: [ContainerId: container_, 
AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
nodeAddress, Resource: , Priority: 0, Token: Token { 
kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
appattempt_
{code}

Can we please have a similar message for non-AM containers too?


> Node information is not provided for non am containers in RM logs
> -
>
> Key: YARN-6977
> URL: https://issues.apache.org/jira/browse/YARN-6977
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sumana Sathish
>  Labels: newbie
>
> There is no information on which node a non-AM container is being assigned 
> in trunk for Hadoop 3.0.
> Earlier we used to have logs for non-AM containers in a similar way:
> {code}
> Assigned container container_ of capacity  on host 
> , which has 1 containers,  used and 
>  available after allocation
> {code}
> 3.0 has this information for the AM container alone, in the following way:
> {code}
> Done launching container Container: [ContainerId: container_, 
> AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
> nodeAddress, Resource: , Priority: 0, Token: Token { 
> kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
> appattempt_
> {code}
> Can we please have a similar message for non-AM containers too?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6977:
-
Component/s: capacity scheduler

> Node information is not provided for non am containers in RM logs
> -
>
> Key: YARN-6977
> URL: https://issues.apache.org/jira/browse/YARN-6977
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sumana Sathish
>  Labels: newbie
>
> There is no information on which node a non-AM container is being assigned 
> in trunk for Hadoop 3.0.
> Earlier we used to have logs for non-AM containers in a similar way:
> {code}
> Assigned container container_ of capacity  on host 
> , which has 1 containers,  used and 
>  available after allocation
> {code}
> 3.0 has this information for the AM container alone, in the following way:
> {code}
> Done launching container Container: [ContainerId: container_, 
> AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
> nodeAddress, Resource: , Priority: 0, Token: Token { 
> kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
> appattempt_
> {code}
> Can we please have a similar message for non-AM containers too?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6977:
-
Labels: newbie  (was: )

> Node information is not provided for non am containers in RM logs
> -
>
> Key: YARN-6977
> URL: https://issues.apache.org/jira/browse/YARN-6977
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sumana Sathish
>  Labels: newbie
>
> There is no information on which node a non-AM container is being assigned 
> in trunk for Hadoop 3.0.
> Earlier we used to have logs for non-AM containers in a similar way:
> {code}
> Assigned container container_ of capacity  on host 
> , which has 1 containers,  used and 
>  available after allocation
> {code}
> 3.0 has this information for the AM container alone, in the following way:
> {code}
> Done launching container Container: [ContainerId: container_, 
> AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
> nodeAddress, Resource: , Priority: 0, Token: Token { 
> kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
> appattempt_
> {code}
> Can we please have a similar message for non-AM containers too?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120671#comment-16120671
 ] 

Hadoop QA commented on YARN-6820:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 2s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-timelineservice in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881068/YARN-6820-YARN-5355.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49ea4f1c0c93 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 3088cfc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/16809/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16809/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16809/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-ya

[jira] [Commented] (YARN-6874) TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120673#comment-16120673
 ] 

Hadoop QA commented on YARN-6874:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
15s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881085/YARN-6874-YARN-5355.0001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 93b87cf6e718 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 3088cfc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16811/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16811/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently
> ---
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>R

[jira] [Commented] (YARN-6905) Multiple test failures due to FastNumberFormat

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120680#comment-16120680
 ] 

Vrushali C commented on YARN-6905:
--

So ApplicationId.toString is being invoked in the AppIdKeyConverter#decode 
function, which is in the hadoop-yarn-server-timelineservice-hbase module. 
This module depends on hadoop-yarn-api as well as hadoop-common, so I think 
moving FastNumberFormat from hadoop-common to hadoop-yarn-api may not help.

I think the timeline service would need to override ApplicationId.toString 
internally. Or, although I think this won't be very popular, hadoop-yarn-api 
could provide an ApplicationId#toStringSlowImpl function (or some such named 
function) in ApplicationId itself that keeps the old code, instead of the 
changes made to ApplicationId#toString() in YARN-6768.

Unfortunately, we could see more classpath conflicts as trunk keeps evolving, 
until the timeline service on trunk can be based on an HBase version that is 
itself based on Hadoop trunk.
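
For illustration, such a fallback could keep the pre-YARN-6768 formatting 
along these lines (a sketch; the method name is hypothetical, and the padding 
rule is assumed to match the old behavior):

{code}
// Hypothetical fallback that avoids FastNumberFormat on the classpath.
public final class AppIdFormatSketch {
  // NumberFormat is not thread-safe, hence the ThreadLocal.
  private static final ThreadLocal<java.text.NumberFormat> ID_FORMAT =
      ThreadLocal.withInitial(() -> {
        java.text.NumberFormat fmt = java.text.NumberFormat.getInstance();
        fmt.setGroupingUsed(false);
        fmt.setMinimumIntegerDigits(4);  // application_<ts>_0001 style padding
        return fmt;
      });

  public static String toStringSlowImpl(long clusterTimestamp, int id) {
    return "application_" + clusterTimestamp + "_" + ID_FORMAT.get().format(id);
  }
}
{code}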

> Multiple test failures due to FastNumberFormat
> --
>
> Key: YARN-6905
> URL: https://issues.apache.org/jira/browse/YARN-6905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 3.0.0-beta1
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Assignee: Haibo Chen
>
> There are multiple tests failing in the Hadoop YARN Timeline Service HBase 
> tests project with the following error:
> {code}
> java.lang.NoClassDefFoundError: org/apache/hadoop/util/FastNumberFormat
> at 
> org.apache.hadoop.yarn.api.records.ApplicationId.toString(ApplicationId.java:104)
> {code}
> Below are the failing tests :
> {code}
>   TestHBaseTimelineStorageApps.testWriteApplicationToHBase
>   TestHBaseTimelineStorageApps.testEvents
>   TestHBaseTimelineStorageEntities.testEventsEscapeTs
>   TestHBaseTimelineStorageEntities.testWriteEntityToHBase
>   TestHBaseTimelineStorageEntities.testEventsWithEmptyInfo
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120682#comment-16120682
 ] 

Vrushali C commented on YARN-6820:
--

Fixing the whitespace issue and the incorrect param name in the javadoc. 

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.005.patch

Uploading v005, which has the following changes as per review:

- Using the empty string "" to initialize the Admin ACL list if YARN_ADMIN_ACL 
is not set
- Using the Principal in the HttpServletRequest to create the UGI instead of 
the remote user in the HttpServletRequest
- Updated unit tests to conform to the above changes
- Fixed the whitespace and javadoc warnings from the last Jenkins report


> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-09 Thread Aaron Gresch (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120696#comment-16120696
 ] 

Aaron Gresch commented on YARN-6736:


Is this a dupe of YARN-4368?

We would like to run both services in parallel for a time. Rather than having 
an "upgrade" mode, I think it would be cleaner to specify the versions in a 
list, as mentioned. I was working on a similar solution: a publisher class 
that takes a collection of timeline services to publish to, based on the 
versions specified (see the sketch below).

I made a similar change locally, and an issue I had was getting my single-node 
setup running with both services: ATS v1 and v2 wanted to use the same port. I 
ended up creating a new conf port setting for v2 that defaults back to the v1 
port if not found.
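
A composite publisher along these lines is what I had in mind (a rough sketch; 
the type and method names are made up, not actual patch code):

{code}
import java.util.List;

// Stand-in for the real system-metrics publisher interface.
interface TimelinePublisherSketch {
  void publish(Object event);
}

// Fans each event out to one delegate per configured ATS version.
final class CompositeTimelinePublisher implements TimelinePublisherSketch {
  private final List<TimelinePublisherSketch> delegates;

  CompositeTimelinePublisher(List<TimelinePublisherSketch> delegates) {
    this.delegates = delegates;   // e.g. a v1 writer and a v2 writer
  }

  @Override
  public void publish(Object event) {
    for (TimelinePublisherSketch p : delegates) {
      p.publish(event);
    }
  }
}
{code}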



> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6736-YARN-5355.001.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120727#comment-16120727
 ] 

Allen Wittenauer commented on YARN-6550:


bq. Also, I was executing this with "bash" explicitly, so the #!/bin/bash 
wouldn't have affected it.

I'm aware.  I'm just pointing out that it's Yet Another Bug in the 
nodemanager's code.

Also, use your toolset:

{code}
$ shellcheck /tmp/container_launch.sh

In /tmp/container_launch.sh line 6:
{
^-- SC1009: The mentioned parser error was in this brace group.


In /tmp/container_launch.sh line 11:
partition (cd_education_status)
^-- SC1073: Couldn't parse this function.
   ^-- SC1065: Trying to declare parameters? Don't. Use () and refer to 
params as $1, $2..


In /tmp/container_launch.sh line 12:
select cd_demo_sk, cd_gender, "
^-- SC1064: Expected a { to open the function definition.
^-- SC1072:  Fix any mentioned problems and try again.

{code}

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120728#comment-16120728
 ] 

Allen Wittenauer commented on YARN-6550:


BTW, be aware that in sh, {} forces a subshell, which means it's going to get 
interpreted first.

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120730#comment-16120730
 ] 

Vrushali C commented on YARN-6323:
--

So this jira, YARN-6323, is not about data inconsistencies; it is about 
dealing with NM startup failure. If you bring up an NM with ATSv2 enabled on a 
node that has an app which has been running since before ATSv2 was turned on, 
the NM will not be able to recover the flow context for this app, since the 
flow context never existed.

A related jira was YARN-6555, in which [~rohithsharma] added the 
work-preserving flow context storage and retrieval on the NM.

To explain this jira a bit more: in the patch on YARN-6555,
https://issues.apache.org/jira/secure/attachment/12869901/YARN-6555.003.patch

at line 386 in ContainerManagerImpl, if p.getFlowContext() != null, then we 
create the FlowContext correctly and pass it in as an argument to 
ApplicationImpl on line 393. But if it is null (when it does not exist), a 
null FlowContext will be passed to ApplicationImpl, and the ApplicationImpl 
constructor will throw new IllegalArgumentException("flow context cannot be 
null").



> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


