[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095839#comment-16095839
 ] 

Vrushali C commented on YARN-6820:
--

Thanks [~jlowe]! Yes, this whitelist would be an interim step that provides 
some level of blanket authorization. We will work out a proper authorization 
solution next, but I was thinking of getting this into the beta release so 
that the community could start integrating their systems with it and thereby 
give us more feedback. Thanks once again! 
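
As a rough illustration of the blanket authorization described above, a 
read-side whitelist check might look like the sketch below. The config keys 
and class name are assumptions for illustration, not the patch's actual 
surface.

{code}
// A minimal sketch, assuming hypothetical config keys; the real property
// names and wiring would be decided in the YARN-6820 patch itself.
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

public class TimelineReaderWhitelist {
  // Hypothetical property names, for illustration only.
  static final String READ_AUTH_ENABLED =
      "yarn.timeline-service.read.authentication.enabled";
  static final String READ_ALLOWED_USERS =
      "yarn.timeline-service.read.allowed.users";

  private final boolean enabled;
  private final Set<String> allowedUsers;

  public TimelineReaderWhitelist(Configuration conf) {
    this.enabled = conf.getBoolean(READ_AUTH_ENABLED, false);
    this.allowedUsers =
        new HashSet<>(conf.getStringCollection(READ_ALLOWED_USERS));
  }

  /** Whitelisted users may read all data; everyone else reads nothing. */
  public void checkAccess(UserGroupInformation callerUGI)
      throws AccessControlException {
    if (!enabled) {
      return; // restriction turned off: all users can read all data
    }
    if (callerUGI == null
        || !allowedUsers.contains(callerUGI.getShortUserName())) {
      throw new AccessControlException(
          "User is not allowed to read timeline service v2 data");
    }
  }
}
{code}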

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all 
> data, and no other user can read any data. The restriction can be turned off 
> so that all users can read all data.
> The whitelist could perhaps be stored in a "domain" table in HBase, or in a 
> configuration setting for the cluster, or something else that's simple 
> enough. ATSv1 has a concept of domain for isolating users for reading; it 
> would be good to keep that in consideration. 
> In ATSv1, domain offers a namespace for the Timeline server, allowing users 
> to host multiple entities, isolating them from other users and applications. 
> A “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, and created and modified timestamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN 
> cluster.
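
For reference, the ATSv1 domain described above is exposed through the 
existing TimelineDomain record; setting one up looks roughly like this (a 
sketch; the id and ACL strings are illustrative):

{code}
import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain;

TimelineDomain domain = new TimelineDomain();
domain.setId("my_team_domain");          // must be unique across the cluster
domain.setDescription("entities owned by my team");
domain.setOwner("userX");
domain.setReaders("userX,userY groupA"); // read ACL: "users groups"
domain.setWriters("userX");              // write ACL
{code}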






[jira] [Commented] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095697#comment-16095697
 ] 

Vrushali C commented on YARN-6850:
--

If we don't want to update the documentation here, could you add a note to 
the documentation jira so that we don't miss this? 

> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and 
> is called for every put.
> We need to ensure that the cell timestamps for metrics in the entity and 
> application (and sub-application) tables are "correct" timestamps, since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table by the 
> coprocessor, which intercepts all reads & writes to cells in this table. It 
> looks at the supplemented timestamp to figure out which app id a particular 
> cell belongs to. This is done in order to ensure no collision occurs when 
> two apps belonging to the same flow run write the same metric at the same 
> timestamp. 
> Discovered in the discussion in YARN-4455 
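
To illustrate the collision-avoidance scheme described above: the supplemented 
timestamp folds an app-id-derived suffix into the real timestamp, roughly as 
in the sketch below (the multiplier and hashing details are illustrative, not 
the exact TimestampGenerator code).

{code}
public final class SupplementedTs {
  // Illustrative multiplier; it leaves decimal room for an app-id suffix.
  private static final long TS_MULTIPLIER = 1_000_000L;

  /** Two apps writing the same metric at the same millisecond get distinct
   *  cell timestamps, so neither put overwrites the other. */
  public static long supplement(long timestampMs, String appId) {
    long suffix = (appId.hashCode() & Integer.MAX_VALUE) % TS_MULTIPLIER;
    return timestampMs * TS_MULTIPLIER + suffix;
  }

  /** Recover the real millisecond timestamp from a supplemented cell ts. */
  public static long truncate(long supplementedTs) {
    return supplementedTs / TS_MULTIPLIER;
  }
}
{code}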






[jira] [Comment Edited] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095696#comment-16095696
 ] 

Vrushali C edited comment on YARN-6850 at 7/21/17 2:42 AM:
---

Thanks [~varun_saxena] for the patch. Yes, I think we want to add something to 
the documentation, perhaps as part of this jira, that mentions that when we 
move from alpha1 to beta, the existing timeseries metrics may not be 
retrievable. 

I had a question unrelated to this patch, but I am seeing it now. 
Why do we have this check?

{code}
if (tsBegin != 0 || tsEnd != Long.MAX_VALUE) {
{code}

In case someone wants all versions for this metric, how would they do it 
without knowing the boundary? It's not so much of a problem for users querying 
manually, but when scripts call such queries, they sometimes put in min as 0 
and max as Long.MAX_VALUE as boundaries in order to fetch everything. Do we 
want to disallow this? (Just wondering what the thought process was.)



was (Author: vrushalic):
Thanks [~varun_saxena] for the patch. Yes, I think we want to add something to 
the documentation, perhaps as part of this jira, that mentions that when we 
move from alpha1 to beta, the existing timeseries metrics may not be 
retrievable. 

I had a question unrelated to this patch, but I am seeing it now. 
Why do we have this check?

bq. if (tsBegin != 0 || tsEnd != Long.MAX_VALUE) {

In case someone wants all versions for this metric, how would they do it 
without knowing the boundary? It's not so much of a problem for users querying 
manually, but when scripts call such queries, they sometimes put in min as 0 
and max as Long.MAX_VALUE as boundaries in order to fetch everything. Do we 
want to disallow this? (Just wondering what the thought process was.)


> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and 
> is called for every put.
> We need to ensure that the cell timestamps for metrics in the entity and 
> application (and sub-application) tables are "correct" timestamps, since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table by the 
> coprocessor, which intercepts all reads & writes to cells in this table. It 
> looks at the supplemented timestamp to figure out which app id a particular 
> cell belongs to. This is done in order to ensure no collision occurs when 
> two apps belonging to the same flow run write the same metric at the same 
> timestamp. 
> Discovered in the discussion in YARN-4455 






[jira] [Commented] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095696#comment-16095696
 ] 

Vrushali C commented on YARN-6850:
--

Thanks [~varun_saxena] for the patch. Yes, I think we want to add something to 
the documentation, perhaps as part of this jira, that mentions that when we 
move from alpha1 to beta, the existing timeseries metrics may not be 
retrievable. 

I had a question unrelated to this patch, but I am seeing it now. 
Why do we have this check?

bq. if (tsBegin != 0 || tsEnd != Long.MAX_VALUE) {

In case someone wants all versions for this metric, how would they do it 
without knowing the boundary? It's not so much of a problem for users querying 
manually, but when scripts call such queries, they sometimes put in min as 0 
and max as Long.MAX_VALUE as boundaries in order to fetch everything. Do we 
want to disallow this? (Just wondering what the thought process was.)
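
For context, the guarded pattern being questioned looks roughly like the 
sketch below (illustrative, not the patch itself): the range is only applied 
when the caller narrows it, so the common "fetch everything" sentinels of 0 
and Long.MAX_VALUE bypass the branch entirely.

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;

class TimeRangeSketch {
  static void applyTimeRange(Scan scan, long tsBegin, long tsEnd)
      throws IOException {
    // Scripts often pass 0 / Long.MAX_VALUE to mean "no bounds"; with this
    // guard, those sentinels skip the explicit range handling entirely.
    if (tsBegin != 0 || tsEnd != Long.MAX_VALUE) {
      scan.setTimeRange(tsBegin, tsEnd);
    }
  }
}
{code}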


> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and 
> is called for every put.
> We need to ensure that the cell timestamps for metrics in the entity and 
> application (and sub-application) tables are "correct" timestamps, since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table by the 
> coprocessor, which intercepts all reads & writes to cells in this table. It 
> looks at the supplemented timestamp to figure out which app id a particular 
> cell belongs to. This is done in order to ensure no collision occurs when 
> two apps belonging to the same flow run write the same metric at the same 
> timestamp. 
> Discovered in the discussion in YARN-4455 






[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095688#comment-16095688
 ] 

Hadoop QA commented on YARN-6804:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
1s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 109 
unchanged - 2 fixed = 109 total (was 111) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 58 unchanged - 0 fixed = 59 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
59s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878288/YARN-6804-trunk.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux 5e5c4ffe5da7 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44350fd |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.o

[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095682#comment-16095682
 ] 

Vrushali C commented on YARN-6323:
--

Ping on this jira. To summarize:

- the new NM fails to recover apps since the timeline flow context is missing 
for old apps on the NM. This patch will put in a default flow context to help 
the NM proceed. 

To answer Rohith's questions:

bq. Application is NOT submitted with tags. So default values are created by 
YARN. RM creates a default FlowContext with FlowName as appName. On NM 
restart, we are creating a FlowContext with appId. So, there will be 
inconsistencies when entities are published during a rolling upgrade.
Yes, there would be inconsistencies, but it is not possible to upgrade the RM 
and all the NMs at exactly the same time unless we take a downtime. 

bq. Assume that the Application is submitted with some tags. RM recovers the 
application and starts publishing with tags as flow context. Again there are 
inconsistencies in the published entities.
Yes, but how do we synchronize the RM and NM across restarts? We could use the 
app id in both cases, but that turns out to be strange default data.   

This patch will ensure the NM does not fail to start up. I thought of adding 
in some default values for dropping the data, but that would be an expensive 
check to do each time we want to write to the backend. 

ping [~rohithsharma] [~varun_saxena] [~haibo.chen], any other ideas? At the 
very least, the NM can't be crashing during an upgrade due to missing flow 
context. 
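
The proposed fallback amounts to something like the sketch below (names follow 
the NM's ApplicationImpl.FlowContext and TimelineUtils; the exact wiring is 
illustrative):

{code}
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.FlowContext;
import org.apache.hadoop.yarn.util.timeline.TimelineUtils;

class FlowContextSketch {
  /** Synthesize a default flow context for apps recovered from a pre-ATSv2
   *  state store instead of letting NM recovery crash. */
  static FlowContext getOrDefault(FlowContext recovered, ApplicationId appId) {
    if (recovered != null) {
      return recovered;
    }
    return new FlowContext(
        TimelineUtils.generateDefaultFlowName(null, appId), // name from appId
        TimelineUtils.DEFAULT_FLOW_VERSION,                 // "1"
        appId.getClusterTimestamp());                       // flow run id
  }
}
{code}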


> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 






[jira] [Updated] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6733:
-
Attachment: YARN-6733-YARN-5355.007.patch

Rebased the earlier 006 patch. Uploading v007. 

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch, 
> YARN-6733-YARN-5355.006.patch, YARN-6733-YARN-5355.007.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as User X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute those 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Updated] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-20 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6804:
-
Attachment: YARN-6804-yarn-native-services.004.patch

I separated this into two patches (004), one with the NM changes that will 
apply to trunk and the other with yarn-native-services changes that will apply 
on top of the trunk patch.

> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6804-trunk.004.patch, 
> YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch, 
> YARN-6804-yarn-native-services.003.patch, 
> YARN-6804-yarn-native-services.004.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.
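
One way the container-ID-based default could work, as a hedged sketch (the 
exact naming scheme is an open design point in this JIRA):

{code}
// Sketch: derive a DNS-friendly hostname from a container id, e.g.
// container_e17_1499221670783_0067_01_000002 -> ctr-e17-1499221670783-...
static String defaultHostname(String containerId) {
  String host = containerId
      .replace("container", "ctr") // shorten the fixed prefix
      .replace('_', '-')           // underscores are not legal in DNS names
      .toLowerCase();
  // DNS labels are limited to 63 characters.
  return host.length() <= 63 ? host : host.substring(0, 63);
}
{code}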






[jira] [Updated] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-20 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6804:
-
Attachment: YARN-6804-trunk.004.patch

> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6804-trunk.004.patch, 
> YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch, 
> YARN-6804-yarn-native-services.003.patch, 
> YARN-6804-yarn-native-services.004.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.






[jira] [Commented] (YARN-6837) Null LocalResource visibility or resource type can crash the nodemanager

2017-07-20 Thread Jinjiang Ling (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095591#comment-16095591
 ] 

Jinjiang Ling commented on YARN-6837:
-

[~jlowe], thanks for your review.

> Null LocalResource visibility or resource type can crash the nodemanager
> 
>
> Key: YARN-6837
> URL: https://issues.apache.org/jira/browse/YARN-6837
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-6837-1.patch, YARN-6837.patch
>
>
> When I write a YARN application, I create a LocalResource like this:
> {quote}
> LocalResource resource = Records.newRecord(LocalResource.class);
> {quote}
> Because I forgot to set its visibility, the job failed when I submitted it.
> But the NodeManagers shut down one by one at the same time, and there is a 
> NullPointerException in the NodeManager's log:
> {quote}
> 2017-07-18 17:54:09,289 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop   
> IP=10.43.156.177OPERATION=Start Container Request   
> TARGET=ContainerManageImpl  RESULT=SUCCESS  
> APPID=application_1499221670783_0067
> CONTAINERID=container_1499221670783_0067_02_03
> 2017-07-18 17:54:09,292 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceSet.addResources(ResourceSet.java:84)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:868)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:819)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:1684)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:96)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1418)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1411)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:745)
> 2017-07-18 17:54:09,292 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1499221670783_0067_02_02 by user hadoop
> {quote}
> Then I changed my code, still setting the visibility to null:
> {quote}
> LocalResource resource = LocalResource.newInstance(
> URL.fromURI(dst.toUri()),
> LocalResourceType.FILE, 
> {color:red}null{color},
> fileStatus.getLen(), 
> fileStatus.getModificationTime());
> {quote}
> The error still happened.
> At last I set the visibility to a correct value, and the error did not 
> happen again.
> So I think a null LocalResource visibility will cause the NodeManager to 
> shut down.
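
A sketch of the kind of defensive check the fix calls for (illustrative; the 
actual patch may validate elsewhere): reject a null visibility or type with a 
client-facing error instead of letting the NullPointerException kill the NM 
dispatcher thread.

{code}
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.exceptions.YarnException;

class LocalResourceCheckSketch {
  static void validate(String key, LocalResource rsrc) throws YarnException {
    // A null visibility or type previously surfaced as an NPE deep inside
    // ResourceSet.addResources, crashing the NM dispatcher thread.
    if (rsrc.getVisibility() == null || rsrc.getType() == null) {
      throw new YarnException(
          "Null resource visibility or type for local resource " + key);
    }
  }
}
{code}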






[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095578#comment-16095578
 ] 

Vrushali C commented on YARN-6733:
--

Looks like I need to rebase my patch to the latest branch state. Will do so 
shortly 

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch, 
> YARN-6733-YARN-5355.006.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as User X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute those 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-07-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095566#comment-16095566
 ] 

Konstantinos Karanasos commented on YARN-6593:
--

Thanks for the feedback, [~leftnoteasy] and [~chris.douglas]!

I will address all your comments and upload a new version.

bq. I'm not sure when you plan to use 
SingleConstraintTransformer/SpecializedConstraintTransformer. If you agree that 
they should not be user-facing, do you think we should move them to a separate 
JIRA while doing implementation?
I agree they should not be user-facing; I will fix the annotations.
The transformations will, for example, be used on the scheduler side, if the 
scheduler wants to do something with specific cardinality or target 
constraints. I would suggest keeping them in this JIRA so that we have all 
these related changes clubbed together.

It seems that the problem of updating constraints will require more 
discussion. Since this patch is already introducing a lot of new stuff, I 
would also suggest pushing the update to future work, after we discuss what 
the exact semantics of updating constraints are. A first solution to the 
problem would be, as you say, to invalidate an existing scheduling request 
and send a new one.
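
For a flavor of what the user-facing API under discussion might allow (the 
method and constant names follow the proposed PlacementConstraints helper and 
should be read as assumptions, not the committed surface):

{code}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

// Anti-affinity: no two containers tagged "hbase-rs" on the same node.
PlacementConstraint antiAffinity =
    build(targetNotIn(NODE, allocationTag("hbase-rs")));

// Cardinality: between 1 and 3 containers tagged "web" per rack.
PlacementConstraint webPerRack = build(cardinality(RACK, 1, 3, "web"));
{code}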



> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch
>
>
> This JIRA introduces an object for defining placement constraints.






[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095563#comment-16095563
 ] 

Hadoop QA commented on YARN-6733:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-6733 does not apply to YARN-5355. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878277/YARN-6733-YARN-5355.006.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16509/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch, 
> YARN-6733-YARN-5355.006.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as User X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute those 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Updated] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6733:
-
Attachment: YARN-6733-YARN-5355.006.patch

Attaching v006 that addresses all the documentation recommendations 

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch, 
> YARN-6733-YARN-5355.006.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as User X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute those 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Created] (YARN-6853) Add MySql Scripts for FederationStateStore

2017-07-20 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-6853:
--

 Summary: Add MySql Scripts for FederationStateStore
 Key: YARN-6853
 URL: https://issues.apache.org/jira/browse/YARN-6853
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola


In YARN-3663 we added the SQL scripts for SQLServer. We want to add MySQL 
scripts to be able to run Federation with a MySQL server, which will be less 
performant but more convenient.






[jira] [Commented] (YARN-3254) HealthReport should include disk full information

2017-07-20 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095484#comment-16095484
 ] 

Suma Shivaprasad commented on YARN-3254:


[~sunilg] Can you please review the patch?

> HealthReport should include disk full information
> -
>
> Key: YARN-3254
> URL: https://issues.apache.org/jira/browse/YARN-3254
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Akira Ajisaka
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: Screen Shot 2015-02-24 at 17.57.39.png, Screen Shot 
> 2015-02-25 at 14.38.10.png, YARN-3254-001.patch, YARN-3254-002.patch, 
> YARN-3254-003.patch
>
>
> When a NodeManager's local disk gets almost full, the NodeManager sends a 
> health report to the ResourceManager saying "local/log dir is bad", and the 
> message is displayed on the ResourceManager web UI. It's difficult for users 
> to detect why the dir is bad.






[jira] [Commented] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095480#comment-16095480
 ] 

Hadoop QA commented on YARN-6768:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 10 new + 28 unchanged 
- 11 fixed = 38 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 45s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.test.TestLambdaTestUtils |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878243/YARN-6768.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 72b83375df60 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c8df366 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16507/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16507/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16507/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: . |
| Console output | 
htt

[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-07-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095440#comment-16095440
 ] 

Wangda Tan commented on YARN-6593:
--

bq. Do you have specific use cases in mind?
Regarding LRA, a change of cardinality is one of the most important 
requirements, for example, increasing the number of HTTP instances for load 
balancing.

bq. I don't have a clear definition of validity across placement requests ...
All valid concerns to me. I don't think we should spend a lot of time solving 
a problem that may be nonexistent.

bq. If the workarounds (like submitting a new application or a new set of 
constraints) are easier to understand  ...
Regarding the new set of constraints, do you mean the user needs to cancel 
the first set of constraints and submit new constraints with a different 
allocation-id? Considering the ResourceRequest problem I mentioned in 
YARN-6594, making the SchedulingRequest sent to the scheduler immutable might 
be a good idea. (The user can still cancel it, but cannot replace it with new 
requests.) What are your thoughts here? 

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch
>
>
> This JIRA introduces an object for defining placement constraints.






[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-07-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095405#comment-16095405
 ] 

Wangda Tan commented on YARN-6033:
--

[~vvasudev], 

While testing YARN-6223 (based on this patch), I think there's a minor change 
needed: 

"container-executor.c" has the following statement:
{code}
executor_cfg = *(get_configuration_section("", &CFG));
{code}

However, if the config file doesn't have any key/value pairs under the 
default section, such as an empty config file, or a file like:
{code}
[named-section]
k=v
{code}

then the value returned by {{get_configuration_section("", &CFG)}} will be 
NULL, which will cause the process to crash without a proper error message.

To fix the problem, we can:
1) Assign an empty/NULL value to executor_cfg. Probably we should change 
executor_cfg to a pointer.
2) By default, create an empty (not null) section in struct configuration.

Thoughts?
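
A sketch of option 2 (illustrative; the wrapper is hypothetical, while 
get_configuration_section and struct section are the existing configuration 
types):

{code}
// Hypothetical guard so callers never dereference a NULL section when the
// default section has no key/value pairs.
static struct section *get_section_or_empty(const char *name,
                                            struct configuration *cfg) {
  struct section *s = get_configuration_section(name, cfg);
  if (s == NULL) {
    static struct section empty; // zero-initialized: no keys, no values
    s = &empty;
  }
  return s;
}
{code}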

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033-YARN-5673.001.patch, YARN-6033-YARN-5673.002.patch
>
>







[jira] [Commented] (YARN-6223) [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation on YARN

2017-07-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095391#comment-16095391
 ] 

Wangda Tan commented on YARN-6223:
--

[~chris.douglas], I just split the native patch out into YARN-6852 and 
included summaries of what the patch does and how it will be used. Please 
share your thoughts.

Thanks!

> [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation 
> on YARN
> 
>
> Key: YARN-6223
> URL: https://issues.apache.org/jira/browse/YARN-6223
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6223.Natively-support-GPU-on-YARN-v1.pdf, 
> YARN-6223.wip.1.patch, YARN-6223.wip.2.patch, YARN-6223.wip.3.patch
>
>
> A variety of workloads are moving to YARN, including machine learning / 
> deep learning workloads which can be sped up by leveraging GPU computation 
> power. Workloads should be able to request GPUs from YARN as simply as CPU 
> and memory.
> *To make a complete GPU story, we should support the following pieces:*
> 1) GPU discovery/configuration: Admins can either configure GPU resources 
> and architectures on each node, or, more advanced, the NodeManager can 
> automatically discover GPU resources and architectures and report them to 
> the ResourceManager. 
> 2) GPU scheduling: The YARN scheduler should account for GPUs as a resource 
> type just like CPU and memory.
> 3) GPU isolation/monitoring: once a task is launched with GPU resources, 
> the NodeManager should properly isolate and monitor the task's resource 
> usage.
> For #2, YARN-3926 can support it natively. For #3, YARN-3611 has introduced 
> an extensible framework to support isolation for different resource types 
> and different runtimes.
> *Related JIRAs:*
> There are a couple of JIRAs (YARN-4122/YARN-5517) filed with similar goals 
> but different solutions:
> For scheduling:
> - YARN-4122/YARN-5517 both add a new GPU resource type to the Resource 
> protocol instead of leveraging YARN-3926.
> For isolation:
> - YARN-4122 proposed using CGroups for isolation, which cannot solve the 
> problems listed at 
> https://github.com/NVIDIA/nvidia-docker/wiki/GPU-isolation#challenges such 
> as minor device number mapping, loading the nvidia_uvm module, mismatches 
> of CUDA/driver versions, etc.






[jira] [Updated] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6852:
-
Attachment: YARN-6852.001.patch

Attached the YARN-6852.001 patch for review.

Thanks [~chris.douglas]/[~vinodkv]/[~sunilg]/[~subru]/[~sidharta-s]/[~vvasudev] 
for the inputs. 

*The ver.001 patch adds:*
1) A "module" concept for the container-executor binary, with which we can 
more easily manage code and decouple functionality. In the future, as 
described in YARN-5673, we can use dlopen, etc. to dynamically load modules 
for even better code isolation.

2) "common-module": common logic shared by the different modules (located in 
impl/modules/common), for example, checking whether a module is enabled or 
not.

3) "cgroups-module": Modifying the devices subcomponent of a cgroup needs 
root permission (see \[1\]), so we have to move some functionality from the 
Java side to the C side. The cgroups-module (located in impl/modules/cgroups) 
is added for this purpose. Please note that to avoid security issues, the 
cgroups module doesn't have any user-invokable interface, so the "yarn" user 
cannot use "container-executor" to do any cgroups-related operations directly.
See below for the cgroups module configs. Note that they don't include a knob 
to enable or disable the module, since it is not directly invoked by the user 
via the CLI.

4) "gpu-module": Does GPU isolation. Users can invoke the gpu operation with 
a command line like the following to block a container from accessing GPUs:
{code}
container-executor gpu --excluded_gpus=0,1 --container_id=container_x_y_z
{code}
We will do strict checks on container_id (see the validation sketch below) to 
make sure a malicious user can't use this tool to block GPU devices of 
cgroups not owned by a YARN-launched container.
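
The strict container_id check could look roughly like this (a sketch using 
POSIX regex over the standard YARN container id shape; the helper name and 
exact pattern are illustrative):

{code}
#include <regex.h>

// Accept only container_(e<epoch>_)?<clusterTs>_<app>_<attempt>_<container>
// before doing any cgroups operation on behalf of the caller.
static int is_valid_container_id(const char *container_id) {
  const char *pattern = "^container_(e[0-9]+_)?[0-9]+_[0-9]+_[0-9]+_[0-9]+$";
  regex_t re;
  if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0) {
    return 0; // fail closed if the pattern cannot compile
  }
  int matched = (regexec(&re, container_id, 0, NULL, 0) == 0);
  regfree(&re);
  return matched;
}
{code}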

*For configs, this patch is based on YARN-6033 (not committed yet), so the 
configs of different modules are placed under different sections, like the 
following:*
{code}
## Original container-executor configs ##
yarn.nodemanager.linux-container-executor.group=#configured value of 
yarn.nodemanager.linux-container-executor.group
banned.users=#comma separated list of users who can not run applications
min.user.id=1000#Prevent other super-users
allowed.system.users=##comma separated list of system users who CAN run 
applications
#

[cgroups]
# Root of system cgroups (Cannot be empty or "/")
root=/sys/fs/cgroups
# Parent folder of YARN's CGroups
yarn-hierarchy=yarn (Cannot be empty)

[gpu]
# Enable / disable the module
module.enabled=true/false 
# Major device number of GPU, by default is 195
gpu.major-device-number=195
# Allowed minor device numbers, empty means all GPU devices managed by YARN.
gpu.allowed-device-minor-numbers=0,1,2,3
{code}

\[1\]: 
https://lists.linuxfoundation.org/pipermail/containers/2012-November/030840.html
 

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups. (native side).






[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095369#comment-16095369
 ] 

Jian He commented on YARN-6130:
---

Btw, what is the existing AppCollectorData#rmIdentifier used for?

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch
>
>







[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095366#comment-16095366
 ] 

Jian He commented on YARN-6130:
---

Minor comment:
The AllocateResponse#newInstance method may not be needed. I think if we have 
the Builder pattern, we don't need to keep adding newInstance methods 
anymore. 
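
To illustrate the point (a sketch; the fluent method names are assumptions 
based on the proposed Builder pattern, not a committed API):

{code}
// With a builder, a new field like the ATSv2 collector info composes into
// existing call sites without another newInstance overload.
AllocateResponse response = AllocateResponse.newBuilder()
    .responseId(lastResponseId + 1)
    .allocatedContainers(allocatedContainers)
    .availableResources(availableResources)
    .collectorInfo(collectorInfo) // the new field from this patch
    .build();
{code}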

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch
>
>







[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095358#comment-16095358
 ] 

Hadoop QA commented on YARN-6620:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-6620 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6620 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878246/YARN-6620.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16508/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6620.001.patch
>
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups. (Java side).
> 3) NM restart and recovery of allocated GPU devices






[jira] [Updated] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6620:
-
Attachment: YARN-6620.001.patch

Attached ver.001 patch for review.

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6620.001.patch
>
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups. (Java side).
> 3) NM restart and recovery of allocated GPU devices






[jira] [Updated] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6852:
-
Description: 
This JIRA plans to add support for:
1) Isolation in CGroups (native side).

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095348#comment-16095348
 ] 

Wangda Tan commented on YARN-6620:
--

(Deleted the originally attached patch since it is out of date and confusing.)

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups (Java side).
> 3) NM restart and recovery of allocated GPU devices



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6620:
-
Description: 
This JIRA plans to add support for:
1) GPU configuration for NodeManagers
2) Isolation in CGroups (Java side).
3) NM restart and recovery of allocated GPU devices


  was:
This JIRA plans to add support for:
1) GPU configuration for NodeManagers
2) Isolation in CGroups.

These are the minimal requirements to support GPU on YARN.


> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups (Java side).
> 3) NM restart and recovery of allocated GPU devices



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6620:
-
Summary: [YARN-6223] NM Java side code changes to support isolate GPU 
devices by using CGroups  (was: [YARN-6223] Support GPU Configuration and 
Isolation in CGroups.)

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups.
> These are the minimal requirements to support GPU on YARN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6620:
-
Attachment: (was: YARN-6620.001.patch)

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups (Java side).
> 3) NM restart and recovery of allocated GPU devices



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-07-20 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-6852:


 Summary: [YARN-6223] Native code changes to support isolate GPU 
devices by using CGroups
 Key: YARN-6852
 URL: https://issues.apache.org/jira/browse/YARN-6852
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4006) YARN ATS Alternate Kerberos HTTP Authentication Changes

2017-07-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-4006:
-
Target Version/s: 2.9.0  (was: 2.8.2)

> YARN ATS Alternate Kerberos HTTP Authentication Changes
> ---
>
> Key: YARN-4006
> URL: https://issues.apache.org/jira/browse/YARN-4006
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, timelineserver
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.5.1, 2.6.1, 2.8.0, 2.7.1, 2.7.2
>Reporter: Greg Senia
>Priority: Blocker
> Attachments: sample-ats-alt-auth.patch, YARN-4006-branch2.6.0.patch, 
> YARN-4006-branch-trunk.patch
>
>
> When attempting to use the Hadoop alternate authentication classes, they do 
> not work exactly with what was built in YARN-1935.
> I went ahead and made the following changes to support using a custom 
> AltKerberos DelegationToken class.
> Changes to: TimelineAuthenticationFilterInitializer.class
> {code}
> String authType = filterConfig.get(AuthenticationFilter.AUTH_TYPE);
> LOG.info("AuthType Configured: " + authType);
> if (authType.equals(PseudoAuthenticationHandler.TYPE)) {
>   filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>       PseudoDelegationTokenAuthenticationHandler.class.getName());
>   LOG.info("AuthType: PseudoDelegationTokenAuthenticationHandler");
> } else if (authType.equals(KerberosAuthenticationHandler.TYPE)
>     || (UserGroupInformation.isSecurityEnabled()
>         && conf.get("hadoop.security.authentication")
>             .equals(KerberosAuthenticationHandler.TYPE))) {
>   if (!(authType.equals(KerberosAuthenticationHandler.TYPE))) {
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE, authType);
>     LOG.info("AuthType: " + authType);
>   } else {
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>         KerberosDelegationTokenAuthenticationHandler.class.getName());
>     LOG.info("AuthType: KerberosDelegationTokenAuthenticationHandler");
>   }
>   // Resolve _HOST into the bind address
>   String bindAddress = conf.get(HttpServer2.BIND_ADDRESS);
>   String principal =
>       filterConfig.get(KerberosAuthenticationHandler.PRINCIPAL);
>   if (principal != null) {
>     try {
>       principal = SecurityUtil.getServerPrincipal(principal, bindAddress);
>     } catch (IOException ex) {
>       throw new RuntimeException(
>           "Could not resolve Kerberos principal name: " + ex.toString(), ex);
>     }
>     filterConfig.put(KerberosAuthenticationHandler.PRINCIPAL, principal);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4006) YARN ATS Alternate Kerberos HTTP Authentication Changes

2017-07-20 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095320#comment-16095320
 ] 

Junping Du commented on YARN-4006:
--

Moving it out of 2.8.2, given there has been no activity on this JIRA for over a year.

> YARN ATS Alternate Kerberos HTTP Authentication Changes
> ---
>
> Key: YARN-4006
> URL: https://issues.apache.org/jira/browse/YARN-4006
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, timelineserver
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.5.1, 2.6.1, 2.8.0, 2.7.1, 2.7.2
>Reporter: Greg Senia
>Priority: Blocker
> Attachments: sample-ats-alt-auth.patch, YARN-4006-branch2.6.0.patch, 
> YARN-4006-branch-trunk.patch
>
>
> When attempting to use the Hadoop alternate authentication classes, they do 
> not work exactly with what was built in YARN-1935.
> I went ahead and made the following changes to support using a custom 
> AltKerberos DelegationToken class.
> Changes to: TimelineAuthenticationFilterInitializer.class
> {code}
> String authType = filterConfig.get(AuthenticationFilter.AUTH_TYPE);
> LOG.info("AuthType Configured: " + authType);
> if (authType.equals(PseudoAuthenticationHandler.TYPE)) {
>   filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>       PseudoDelegationTokenAuthenticationHandler.class.getName());
>   LOG.info("AuthType: PseudoDelegationTokenAuthenticationHandler");
> } else if (authType.equals(KerberosAuthenticationHandler.TYPE)
>     || (UserGroupInformation.isSecurityEnabled()
>         && conf.get("hadoop.security.authentication")
>             .equals(KerberosAuthenticationHandler.TYPE))) {
>   if (!(authType.equals(KerberosAuthenticationHandler.TYPE))) {
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE, authType);
>     LOG.info("AuthType: " + authType);
>   } else {
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>         KerberosDelegationTokenAuthenticationHandler.class.getName());
>     LOG.info("AuthType: KerberosDelegationTokenAuthenticationHandler");
>   }
>   // Resolve _HOST into the bind address
>   String bindAddress = conf.get(HttpServer2.BIND_ADDRESS);
>   String principal =
>       filterConfig.get(KerberosAuthenticationHandler.PRINCIPAL);
>   if (principal != null) {
>     try {
>       principal = SecurityUtil.getServerPrincipal(principal, bindAddress);
>     } catch (IOException ex) {
>       throw new RuntimeException(
>           "Could not resolve Kerberos principal name: " + ex.toString(), ex);
>     }
>     filterConfig.put(KerberosAuthenticationHandler.PRINCIPAL, principal);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-20 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-6768:
--
Attachment: YARN-6768.6.patch

> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch, YARN-6768.2.patch, YARN-6768.3.patch, 
> YARN-6768.4.patch, YARN-6768.5.patch, YARN-6768.6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095298#comment-16095298
 ] 

Jason Lowe commented on YARN-6820:
--

A simple whitelist of users that are allowed to use the readers and thus access 
all ATSv2 data would probably let us deploy ATSv2 in a limited way.  We could 
deploy ATSv2 alongside our existing ATSv1.5 and have the producers write to 
_both_ ATSv1.5 and ATSv2.  We would whitelist our admin users for ATSv2 but 
still direct all our regular users to UIs based on ATSv1.5.  So it would allow 
us to deploy ATSv2, but only admins would see any benefits.  Our normal users 
cannot have unlimited access to the data within ATSv2, since one user could 
access metadata (counters, configs, etc.) for another user's job.  Therefore a 
whitelist option isn't something we can live with if we want normal users to be 
able to access ATSv2.  Until there's a "real" ACL solution for ATSv2 we would 
be forced to continue deploying ATSv1.5 so our users can still access the Tez 
UI, YARN history server, etc.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> and no other user can read any data. It can also be turned off so that all 
> users can read all data.
> This could be stored in a "domain" table in HBase perhaps, or a configuration 
> setting for the cluster, or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading; it would be good to keep 
> that in consideration. 
> In ATSv1, a domain offers a namespace for the Timeline Server, allowing users 
> to host multiple entities while isolating them from other users and 
> applications. A "Domain" in ATSv1 primarily stores owner info, read and 
> write ACL information, and created and modified timestamp information. Each 
> Domain is identified by an ID which must be unique across all users in the 
> YARN cluster.
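
For reference, the ATSv1 domain record described above exists as an API object today; a minimal sketch, with a made-up domain ID and ACL strings:

{code}
import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain;

// Sketch of an ATSv1 domain: an ID unique across the cluster, plus
// reader/writer ACLs in the usual "user1,user2 group1,group2" form.
TimelineDomain domain = new TimelineDomain();
domain.setId("tez_session_42");            // hypothetical domain ID
domain.setDescription("DAGs for session 42");
domain.setReaders("userX,userY analysts"); // read ACL: users, then groups
domain.setWriters("userX");                // write ACL
{code}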



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095273#comment-16095273
 ] 

Hadoop QA commented on YARN-6768:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: 
The patch generated 4 new + 28 unchanged - 11 fixed = 32 total (was 39) {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 4 new + 123 unchanged - 0 fixed = 127 total (was 123) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878234/YARN-6768.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a3e67ebf93fb 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c8df366 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/16506/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/16506/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/16506/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16506/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| mvnsite | 
https://builds.apache.org

[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095232#comment-16095232
 ] 

Vrushali C commented on YARN-6733:
--

Will update the documentation. Varun and I discussed this during today's call 
and decided to keep the flow version column in this table. I will check the 
documentation once again to ensure we have it mentioned. 



> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch
>
>
> After a discussion with Tez folks, we have been thinking about introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as User X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This JIRA tracks the code changes needed for table schema creation.
> I will file other JIRAs for writing to that table, updating the user name 
> fields to include the sub-application user, etc.
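
A minimal sketch of the doAs pattern mentioned in the description, using the standard UserGroupInformation API; the user name "dagSubmitter" is illustrative:

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

// The session itself runs as the login user (User X); each DAG is executed
// on behalf of the submitting user via a proxy UGI.
void runAsSubApplicationUser() throws Exception {
  UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
      "dagSubmitter",                        // hypothetical sub-application user
      UserGroupInformation.getLoginUser());  // session user (User X)
  proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
    // Work done here is attributed to the sub-application (doAs) user.
    return null;
  });
}
{code}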



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6779) DominantResourceFairnessPolicy.DominantResourceFairnessComparator.calculateShares() should be @VisibleForTesting

2017-07-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6779:
--

Assignee: Laura Torres

> DominantResourceFairnessPolicy.DominantResourceFairnessComparator.calculateShares()
>  should be @VisibleForTesting
> 
>
> Key: YARN-6779
> URL: https://issues.apache.org/jira/browse/YARN-6779
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Laura Torres
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6779-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-20 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-6768:
--
Attachment: YARN-6768.5.patch

> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch, YARN-6768.2.patch, YARN-6768.3.patch, 
> YARN-6768.4.patch, YARN-6768.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6726) Fix issues with docker commands executed by container-executor

2017-07-20 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095227#comment-16095227
 ] 

Chris Douglas commented on YARN-6726:
-

Not sure if I'll have cycles to review the patch in detail, but quickly:

bq. No user input is used, so this should be safe.
We also need to prevent the {{yarn}} user from becoming root, so we can't trust 
input to the CE even if it's filled in by the NM during container launch.

> Fix issues with docker commands executed by container-executor
> --
>
> Key: YARN-6726
> URL: https://issues.apache.org/jira/browse/YARN-6726
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6726.001.patch
>
>
> docker inspect, rm, stop, etc. are issued through container-executor. Commands 
> other than docker run are not functioning properly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5304) Ship single node HBase config option with single startup command

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095225#comment-16095225
 ] 

Vrushali C commented on YARN-5304:
--

Went over this JIRA with [~jrottinghuis] and here is a summary of what we 
discussed:

- Provide a default HBase site file via the timeline service (perhaps called 
hbase-site-timelineservice.xml) which can be used to start up a single-node 
HBase setup.
- Ask the user to set HBASE_HOME to point to where the HBase installation is.
- Ask the user to set the yarn.timeline-service.hbase.configuration.file config 
param to where the hbase-site-timelineservice.xml is located (a sketch of this 
wiring follows below).
- Ask the user to ensure that HBase is set up on a node which is in the ZK 
quorum setting in the config, so that clients can connect to this cluster. It 
could be the same node as yarn.timeline-service.hostname, or a node like the RM.
- Ask the user to ensure that HBase clients have this 
yarn.timeline-service.hbase.configuration.file in their classpath.
- Provide a set of steps so that a user can start up / stop a single-node HBase 
cluster (ensuring that the NN address is in the hbase.rootdir setting).
- Also, ensure that it's clear that some steps are one-time only, like copying 
the coprocessor jars and creating the schema. They need not be done each time 
the cluster is started/stopped/restarted. 

The steps should be made as clear as we can make them. The idea is that users 
can just copy/paste these commands (with minimal, clearly explained variable 
modifications) and run them. 
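
A rough sketch of that wiring; all values below (paths, hostnames) are placeholder assumptions, and only the yarn.timeline-service.hbase.configuration.file property name comes from the discussion above (hbase.rootdir and hbase.zookeeper.quorum are standard HBase properties):

{code}
<!-- yarn-site.xml: point the timeline service at the standalone HBase config -->
<property>
  <name>yarn.timeline-service.hbase.configuration.file</name>
  <value>file:/etc/hbase/conf/hbase-site-timelineservice.xml</value>
</property>

<!-- hbase-site-timelineservice.xml: hypothetical single-node defaults -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://nn.example.com:8020/hbase</value>  <!-- NN address goes here -->
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>rm.example.com</value>  <!-- node the clients use to reach the cluster -->
</property>
{code}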


> Ship single node HBase config option with single startup command
> 
>
> Key: YARN-5304
> URL: https://issues.apache.org/jira/browse/YARN-5304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>  Labels: YARN-5355, yarn-5355-merge-blocker
>
> For small to medium Hadoop deployments we should make it dead-simple to use 
> the timeline service v2. We should have a single command to launch and stop 
> the timelineservice back-end for the default HBase implementation.
> A default config with all the values should be packaged that launches all the 
> needed daemons (on the RM node) with a single command with all the 
> recommended settings.
> Having a timeline admin command, perhaps an init command might be needed, or 
> perhaps the timeline service can even auto-detect that and create tables, 
> deploy needed coprocessors etc.
> The overall purpose is to ensure nobody needs to be an HBase expert to get 
> this going. For those cluster operators with HBase experience, they can 
> choose their own more sophisticated deployment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095220#comment-16095220
 ] 

Hadoop QA commented on YARN-6804:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 2s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
2s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in yarn-native-services has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
36s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 72 
unchanged - 2 fixed = 72 total (was 74) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 166 unchanged - 1 fixed = 169 total (was 167) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878217/YARN-6804-yarn-native-services.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux 9f483d9af027 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 

[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-07-20 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095217#comment-16095217
 ] 

Chris Douglas commented on YARN-6593:
-

bq. I found it is very important, since otherwise we cannot support complex 
placement requests which need to be updated.
Do you have specific use cases in mind? If there's a restricted form we could 
use to support them (e.g., adjusting cardinality, as in your example), that 
would be easier for us to support and for users to reason about. Since we don't 
have applications that use placement constraints yet, it may be difficult for 
us to predict where they need to change during execution (if at all).

bq. Regarding the semantics, I prefer to apply updates to all containers placed 
subsequently; this is also the closest to the existing behavior of YARN. We just 
need to verify that the updated placement request is still valid; we probably 
don't need to restrict it to some parameters.
I don't have a clear definition of validity across placement requests, 
particularly preserving it across a sequence of updates to the constraints. We 
could support relaxations of existing constraints, probably. Still, updates 
also require the LRA scheduler to maintain lineage for all its internal 
structures. A likely implementation will convert users' expressions to some 
normal form, combine those with admin constraints, forecast future allocations, 
inject requests into the scheduler, etc. Even if we could offer well-defined 
semantics for updates, the implementation and maintenance cost could outweigh 
the marginal benefit to users. If the workarounds (like submitting a new 
application or a new set of constraints) are easier to understand, that's 
probably what users will prefer, anyway.

Placement constraint updates also compound the {{ResourceRequest}} problem you 
cited in YARN-6594. Which epoch of the placement constraints applied to a 
container returned by the RM, and for which RR? If a user's application isn't 
getting containers, how is that debugged? If someone wants to reason about a 
group of constraints for a production cluster while applications change clauses 
programmatically at runtime, then that analysis goes from difficult to 
intractable.

You guys are implementing it, but I'd push this to future work.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch
>
>
> This JIRA introduces an object for defining placement constraints.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095209#comment-16095209
 ] 

Vrushali C commented on YARN-6820:
--

Thanks [~varun_saxena] and [~jlowe]!
Yes, I think we could look at these HBase visibility labels outside of the 
scope of this JIRA. I think a whitelist of users set via a config setting is 
perhaps a good, simple enough option to add in. 

[~jlowe], would that be OK for your use case? It would be a YARN config 
setting that has a list of users who are allowed to read ATSv2 data; the 
timeline reader would basically just check whether the requesting user is part 
of that list and proceed accordingly (see the sketch below). Adding or removing 
any user would mean a config change and a restart of the timeline reader 
process.
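
A minimal sketch of that reader-side check, assuming a hypothetical config key and class name (neither is a committed API):

{code}
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical whitelist check for the timeline reader; the config key
// "yarn.timeline-service.read.allowed-users" is assumed, not committed.
public class TimelineReaderWhitelist {
  private final Set<String> allowedUsers;

  public TimelineReaderWhitelist(Configuration conf) {
    allowedUsers = new HashSet<>(
        conf.getStringCollection("yarn.timeline-service.read.allowed-users"));
  }

  // Whitelisted users may read all ATSv2 data; everyone else is denied.
  public boolean isReadAllowed(UserGroupInformation caller) {
    return allowedUsers.contains(caller.getShortUserName());
  }
}
{code}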

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> and no other user can read any data. It can also be turned off so that all 
> users can read all data.
> This could be stored in a "domain" table in HBase perhaps, or a configuration 
> setting for the cluster, or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading; it would be good to keep 
> that in consideration. 
> In ATSv1, a domain offers a namespace for the Timeline Server, allowing users 
> to host multiple entities while isolating them from other users and 
> applications. A "Domain" in ATSv1 primarily stores owner info, read and 
> write ACL information, and created and modified timestamp information. Each 
> Domain is identified by an ID which must be unique across all users in the 
> YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-07-20 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095165#comment-16095165
 ] 

Wei Yan commented on YARN-4161:
---

The failed test case passed locally; it looks unrelated to this patch.
Thanks, [~wangda]. For the documentation part, I created YARN-6851 and will put 
details there.

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.003.patch, 
> YARN-4161.004.patch, YARN-4161.005.patch, YARN-4161.patch, YARN-4161.patch.1
>
>
> Capacity Scheduler right now schedules multiple containers per heartbeat if 
> there are more resources available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a feature 
> to drive this via configuration, so that we can control the number of 
> containers assigned per heartbeat.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-07-20 Thread Wei Yan (JIRA)
Wei Yan created YARN-6851:
-

 Summary: Capacity Scheduler: document configs for controlling # 
containers allowed to be allocated per node heartbeat
 Key: YARN-6851
 URL: https://issues.apache.org/jira/browse/YARN-6851
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor


YARN-4161 introduces new configs for controlling how many containers are allowed 
to be allocated in each node heartbeat. We also had the offswitchCount config 
before. It would be better to document these configurations in the CS section; a 
sketch of what they might look like follows below.
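
As best I recall, the properties look roughly like the following in capacity-scheduler.xml; the exact names and defaults should be verified against the committed YARN-4161 patch:

{code}
<!-- capacity-scheduler.xml (sketch; names/defaults to be confirmed) -->
<property>
  <name>yarn.scheduler.capacity.per-node-heartbeat.multiple-assignments-enabled</name>
  <value>true</value>  <!-- allow more than one container per heartbeat -->
</property>
<property>
  <name>yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments</name>
  <value>10</value>    <!-- cap on containers assigned in one heartbeat -->
</property>
<property>
  <name>yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments</name>
  <value>1</value>     <!-- the pre-existing off-switch cap mentioned above -->
</property>
{code}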



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6842) Implement a new access type for queue

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095136#comment-16095136
 ] 

Hadoop QA commented on YARN-6842:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 113 unchanged - 1 fixed = 114 total (was 114) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6842 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878208/YARN-6842.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cf45f73f4365 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c8df366 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16504/artifact/p

[jira] [Commented] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095074#comment-16095074
 ] 

Varun Saxena commented on YARN-6734:


Had a cursory glance at the patch. It seems to be fine. A couple of comments:

# Can we refactor the store*ToSubApplicationTable methods? They are quite 
similar to the storeInfo, storeMetrics, storeConfigs, etc. methods.
# In HBaseTimelineWriterImpl#write, L175, the null check for subApplicationUser 
is not required; userId.equals(subApplicationUser) handles null as well (see 
the sketch below).

Will look at the patch in further detail by tomorrow.
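
A one-line demonstration of why the extra check is redundant, assuming userId itself is non-null at that point:

{code}
String userId = "userX";
String subApplicationUser = null;
// String.equals(null) is specified to return false, so a single equals()
// call covers both the "different user" and the "no sub-app user" cases:
boolean differs = !userId.equals(subApplicationUser);  // true, no NPE
{code}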

> Ensure sub-application user is extracted & sent to timeline service
> ---
>
> Key: YARN-6734
> URL: https://issues.apache.org/jira/browse/YARN-6734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6734-YARN-5355.001.patch
>
>
> After a discussion with Tez folks, we have been thinking about introducing a 
> table to store sub-application information (YARN-6733).
> For example, a Tez session may run for a certain period as User X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> YARN-6733 tracks the code changes needed for table schema creation.
> This JIRA tracks writing to that table, updating the user name fields to 
> include the sub-application user, etc. This would mean adding a field to Flow 
> Context which can store an additional user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-20 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6804:
-
Attachment: YARN-6804-yarn-native-services.003.patch

Attaching a new patch addressing [~jianhe]'s comments. Since we are performing 
the container/ctr replacement earlier, in encodeYarnID, the ZooKeeper node is 
now named with the ctr-* version of the container ID (illustrated below). This 
is a good improvement, since we are no longer performing different 
transformations of the container ID for the registry in different places.
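
Purely as an illustration of the kind of substitution described; the exact transformation below is an assumption, and the real logic lives in encodeYarnID:

{code}
String containerId = "container_e17_1500000000000_0001_01_000002";
// Hypothetical sketch: swap the "container" prefix for "ctr" and make the
// result DNS-friendly, yielding the ctr-* form used for the ZK node name.
String zkNodeName =
    containerId.replaceFirst("^container", "ctr").replace('_', '-');
// -> "ctr-e17-1500000000000-0001-01-000002"
{code}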

> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch, 
> YARN-6804-yarn-native-services.003.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6791) Add a new acl control to make YARN acl control perfect

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095051#comment-16095051
 ] 

Hadoop QA commented on YARN-6791:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 113 unchanged - 1 fixed = 114 total (was 114) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m 
50s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878204/YARN-6791.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7e078d87d7f5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0ba8cda |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16503/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16503/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
ha

[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object

2017-07-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095049#comment-16095049
 ] 

Wangda Tan commented on YARN-6593:
--

[~chris.douglas], 

bq.  Do we want to support users adding new transforms? If not, some of the 
implementation details could be package-private.
I think we should not support users adding new transforms. 

bq. Wangda Tan: what are the semantics of updated constraints? ...
Yeah, we missed it in our design doc; however, I found it is very important, 
since otherwise we cannot support complex placement requests which need to be 
updated. 
Regarding the semantics, I prefer to apply updates to all containers placed 
subsequently; this is also the closest to the existing behavior of YARN. We 
just need to verify that the updated placement request is still valid; we 
probably don't need to restrict it to some parameters.
What are your thoughts here? 

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6593.001.patch, YARN-6593.002.patch, 
> YARN-6593.003.patch, YARN-6593.004.patch
>
>
> This JIRA introduces an object for defining placement constraints.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095043#comment-16095043
 ] 

Varun Saxena edited comment on YARN-6734 at 7/20/17 5:33 PM:
-

[~rohithsharma], will we be creating a proxy user to write entities to the 
collector in this scenario? As I see from the patch, the sub-application user 
is taken from the request UGI.
This is because, for a secure cluster, we would assign collector tokens to the 
AM user and then send them to the AM from the RM in the Allocate Response.


was (Author: varun_saxena):
[~rohithsharma], will we be creating a proxy user to write entities to the 
collector in this scenario? Because from the patch, the sub-application user is 
taken from the request UGI.
Because for a secure cluster, we would assign collector tokens to the AM user 
and then send them to the AM from the RM in the Allocate Response.

> Ensure sub-application user is extracted & sent to timeline service
> ---
>
> Key: YARN-6734
> URL: https://issues.apache.org/jira/browse/YARN-6734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6734-YARN-5355.001.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information (YARN-6733).
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> YARN-6733 tracks the code changes needed for table schema creation.
> This jira tracks writing to that table, updating the user name fields to 
> include the sub-application user, etc. This would mean adding a field to 
> Flow Context which can store an additional user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095043#comment-16095043
 ] 

Varun Saxena commented on YARN-6734:


[~rohithsharma], will we be creating a proxy user to write entities to the 
collector in this scenario? From the patch, the sub-application user is taken 
from the request UGI.
This is because, for a secure cluster, we would assign collector tokens to the 
AM user and then send them to the AM from the RM in the Allocate Response.
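
For context, here is a minimal sketch of what "creating a proxy user" could 
look like with the standard UGI API; the helper names below are hypothetical, 
not from the attached patch:

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class SubAppUserWrite {
  // "subAppUser" stands in for the doAs user extracted from the request UGI;
  // writeEntities() stands in for whatever call posts entities to the
  // collector.
  static void writeAsSubAppUser(String subAppUser) throws Exception {
    UserGroupInformation realUser = UserGroupInformation.getCurrentUser();
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser(subAppUser, realUser);
    proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      writeEntities();  // hypothetical helper
      return null;
    });
  }

  static void writeEntities() {
    // placeholder for the actual collector write
  }
}
{code}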

> Ensure sub-application user is extracted & sent to timeline service
> ---
>
> Key: YARN-6734
> URL: https://issues.apache.org/jira/browse/YARN-6734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6734-YARN-5355.001.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information (YARN-6733).
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> YARN-6733 tracks the code changes needed for table schema creation.
> This jira tracks writing to that table, updating the user name fields to 
> include the sub-application user, etc. This would mean adding a field to 
> Flow Context which can store an additional user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6840) Leverage RMStateStore to store scheduler configuration updates

2017-07-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095022#comment-16095022
 ] 

Wangda Tan commented on YARN-6840:
--

[~Naganarasimha], I think storing to a single ZK node should be fine. I'm not 
sure if you could help run an experiment: what is the size of a 
capacity-scheduler.xml with several hundred queues after compression? I expect 
it should be less than 30 KB. And as the number of queues grows, the 
compression ratio should improve as well (more duplicated fields). We could 
also store a diff of the changes instead of the whole config file for every 
update.
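
A rough sketch of such an experiment is below; the generated XML is 
illustrative, not a real capacity-scheduler.xml, so the numbers only 
approximate the compression ratio:

{code}
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class ConfigCompressionEstimate {
  public static void main(String[] args) throws Exception {
    // Generate a config-like payload with several hundred queues.
    StringBuilder xml = new StringBuilder("<configuration>\n");
    for (int i = 0; i < 500; i++) {
      xml.append("  <property><name>yarn.scheduler.capacity.root.queue")
         .append(i).append(".capacity</name><value>0.2</value></property>\n");
    }
    xml.append("</configuration>\n");
    byte[] raw = xml.toString().getBytes(StandardCharsets.UTF_8);

    // Gzip it and compare sizes (ZK znodes default to a ~1 MB limit).
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write(raw);
    }
    System.out.println("raw = " + raw.length + " bytes, gzip = "
        + bos.size() + " bytes");
  }
}
{code}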

> Leverage RMStateStore to store scheduler configuration updates
> --
>
> Key: YARN-6840
> URL: https://issues.apache.org/jira/browse/YARN-6840
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>
> With this change, users don't have to set up a separate storage system (like 
> LevelDB) to store updates to scheduler configs, and dynamic queues can be 
> used when RM HA is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094990#comment-16094990
 ] 

Joep Rottinghuis commented on YARN-6850:


+1 looks good.

Note that for data written before this patch, the right data isn't returned 
when time series are retrieved.
This is still alpha, so only the few of us who are testing should be impacted. 
The TTL should work properly after this.

We should merge this to trunk as it will change how data is retrieved.
We should consider adding something to the release notes wrt. the 
incompatibility between the alpha and this latest drop.
If really needed, we could write an (MR) job to read all the data and re-write 
it, in case there is a use case that has already started to collect metrics 
and the original data absolutely needs to be retrieved. Given the alpha 
status, I think this change is acceptable to fix the TTL functionality.

> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and is 
> called for every put.
> We need to ensure that the cell timestamps for metrics in entity and 
> application (and sub application) tables are "correct" timestamps since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table by the 
> coprocessor which intercepts all reads & writes to cells in this table. It 
> looks at the supplemented timestamp to figure out which app id this 
> particular cell belongs to. This is done in order to ensure no collision 
> occurs when two apps belonging to the same flow run write the same metric at the 
> same timestamp. 
> Discovered in the discussion in YARN-4455 
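
To make the collision-avoidance idea concrete, here is a simplified sketch of 
what "supplementing" a timestamp means; the multiplier and app-id hashing 
below are illustrative, not the exact TimestampGenerator logic:

{code}
public final class SupplementedTimestamp {
  private static final long TS_MULTIPLIER = 1_000_000L;

  // Scale the real timestamp and fold in an app-id-derived suffix so that two
  // apps writing the same metric at the same millisecond get distinct cell
  // timestamps in the flow run table.
  public static long supplement(long timestampMs, String appId) {
    long suffix = Math.abs(appId.hashCode()) % TS_MULTIPLIER;
    return timestampMs * TS_MULTIPLIER + suffix;
  }

  // TTL and time-range checks need the real timestamp back, which is why the
  // supplemented form must not leak into the entity/application tables.
  public static long original(long supplemented) {
    return supplemented / TS_MULTIPLIER;
  }
}
{code}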



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094973#comment-16094973
 ] 

Hadoop QA commented on YARN-6788:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 3s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} YARN-3926 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3926 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 24 new + 126 unchanged - 17 fixed = 150 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 4 new + 0 unchanged - 1 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 33s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
|  |  Possible null pointer dereference of a in 
org.apache.hadoop.yarn.api.records.Resource.equals(Object)  Dereferenced at 
Resource.java:a in org.apache.hadoop.yarn.api.records.Resource.equals(Object)  
Dereferenced at Resource.java:[line 358] |
|  |  org.apache.hadoop.yarn.api.records.impl.BaseResource.getResources() may 
expose internal representation by returning BaseResource.resources  At 
BaseResource.java:by returning BaseResource.resources  At 
BaseResource.java:[line 129] |
|  |  Public static 
org.apache.hadoop.yarn.util.resource.ResourceUtils.getResourceNamesArray() may 
expose internal representation by returning ResourceUtils.reso
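
For reference, the usual fixes for the two findbugs patterns listed above look 
like the following illustrative sketch (not the actual Resource/BaseResource 
code): a null-safe equals() plus a defensive copy in the getter.

{code}
import java.util.Arrays;

class ResourceLike {
  private long[] resources = new long[0];

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof ResourceLike)) {  // also rejects null, avoiding NPE
      return false;
    }
    ResourceLike other = (ResourceLike) obj;
    return Arrays.equals(resources, other.resources);
  }

  @Override
  public int hashCode() {
    return Arrays.hashCode(resources);
  }

  public long[] getResources() {
    // Defensive copy so callers cannot mutate internal state.
    return Arrays.copyOf(resources, resources.length);
  }
}
{code}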

[jira] [Commented] (YARN-6837) Null LocalResource visibility or resource type can crash the nodemanager

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094926#comment-16094926
 ] 

Hudson commented on YARN-6837:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12038 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12038/])
YARN-6837. Null LocalResource visibility or resource type can crash the (jlowe: 
rev c8df3668ecc37c2d58cad35520a762eaec3c8539)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/impl/pb/TestApplicationClientProtocolRecords.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
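
The gist of the fix, sketched below in simplified form (the exact checks and 
exception types in the commit may differ), is to validate the launch context's 
local resources up front instead of letting a null field crash the NM:

{code}
import org.apache.hadoop.yarn.api.records.LocalResource;

public class LocalResourceValidation {
  // Fail the start-container request with a clear error rather than letting
  // a null visibility or type trigger an NPE in the NM dispatcher thread.
  static void validate(String key, LocalResource rsrc) {
    if (rsrc == null || rsrc.getResource() == null) {
      throw new IllegalArgumentException("Null resource URL for " + key);
    }
    if (rsrc.getType() == null) {
      throw new IllegalArgumentException("Null resource type for " + key);
    }
    if (rsrc.getVisibility() == null) {
      throw new IllegalArgumentException("Null resource visibility for " + key);
    }
  }
}
{code}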


> Null LocalResource visibility or resource type can crash the nodemanager
> 
>
> Key: YARN-6837
> URL: https://issues.apache.org/jira/browse/YARN-6837
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-6837-1.patch, YARN-6837.patch
>
>
> When I write a YARN application, I create a LocalResource like this:
> {quote}
> LocalResource resource = Records.newRecord(LocalResource.class);
> {quote}
> Because I forgot to set its visibility, the job failed when I 
> submitted it.
> But the NodeManagers shut down one by one at the same time, and there is a 
> NullPointerException in the NodeManager's log:
> {quote}
> 2017-07-18 17:54:09,289 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop   
> IP=10.43.156.177OPERATION=Start Container Request   
> TARGET=ContainerManageImpl  RESULT=SUCCESS  
> APPID=application_1499221670783_0067
> CONTAINERID=container_1499221670783_0067_02_03
> 2017-07-18 17:54:09,292 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceSet.addResources(ResourceSet.java:84)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:868)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:819)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:1684)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:96)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1418)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1411)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:745)
> 2017-07-18 17:54:09,292 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1499221670783_0067_02_02 by user hadoop
> {quote}
> Then I changed my code but still set the visibility to null:
> {quote}
> LocalResource resource = LocalResource.newInstance(
> URL.fromURI(dst.toUri()),
> LocalResourceType.FILE, 
> {color:red}null{color},
> fileStatus.getLen(), 
> fileStatus.getModificationTime());
> {quote}
> This error still happened.
> Finally, I set the visibility to a correct value, and the error did not happen again.
> So I think a null LocalResource visibility will cause the NodeManager to 
> shut down.

[jira] [Updated] (YARN-6837) Null LocalResource visibility or resource type can crash the nodemanager

2017-07-20 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6837:
-
Summary: Null LocalResource visibility or resource type can crash the 
nodemanager  (was: When the LocalResource's visibility is null, the NodeManager 
will shutdown)

> Null LocalResource visibility or resource type can crash the nodemanager
> 
>
> Key: YARN-6837
> URL: https://issues.apache.org/jira/browse/YARN-6837
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
> Attachments: YARN-6837-1.patch, YARN-6837.patch
>
>
> When I write a YARN application, I create a LocalResource like this:
> {quote}
> LocalResource resource = Records.newRecord(LocalResource.class);
> {quote}
> Because I forgot to set its visibility, the job failed when I 
> submitted it.
> But the NodeManagers shut down one by one at the same time, and there is a 
> NullPointerException in the NodeManager's log:
> {quote}
> 2017-07-18 17:54:09,289 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop   
> IP=10.43.156.177OPERATION=Start Container Request   
> TARGET=ContainerManageImpl  RESULT=SUCCESS  
> APPID=application_1499221670783_0067
> CONTAINERID=container_1499221670783_0067_02_03
> 2017-07-18 17:54:09,292 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceSet.addResources(ResourceSet.java:84)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:868)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:819)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:1684)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:96)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1418)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1411)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:745)
> 2017-07-18 17:54:09,292 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1499221670783_0067_02_02 by user hadoop
> {quote}
> Then I changed my code but still set the visibility to null:
> {quote}
> LocalResource resource = LocalResource.newInstance(
> URL.fromURI(dst.toUri()),
> LocalResourceType.FILE, 
> {color:red}null{color},
> fileStatus.getLen(), 
> fileStatus.getModificationTime());
> {quote}
> This error still happened.
> Finally, I set the visibility to a correct value, and the error did not happen again.
> So I think a null LocalResource visibility will cause the NodeManager to 
> shut down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6842) Implement a new access type for queue

2017-07-20 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6842:
--
Attachment: YARN-6842.001.patch

> Implement a new access type for queue
> -
>
> Key: YARN-6842
> URL: https://issues.apache.org/jira/browse/YARN-6842
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.8.2
>
> Attachments: YARN-6842.001.patch
>
>
> At present, when we want to access the applications of a queue, the only 
> thing we can do is become an administrator of the queue.
> But sometimes we only want to authorize someone to view the applications of 
> a queue without granting modify operations.
> Our current mechanism has no way to express this, so I will implement a new 
> access type for queues to solve this problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6837) When the LocalResource's visibility is null, the NodeManager will shutdown

2017-07-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094867#comment-16094867
 ] 

Jason Lowe commented on YARN-6837:
--

The findbug warnings are unrelated to the patch.

+1 lgtm.  Committing this.

> When the LocalResource's visibility is null, the NodeManager will shutdown
> --
>
> Key: YARN-6837
> URL: https://issues.apache.org/jira/browse/YARN-6837
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
> Attachments: YARN-6837-1.patch, YARN-6837.patch
>
>
> When I write a YARN application, I create a LocalResource like this:
> {quote}
> LocalResource resource = Records.newRecord(LocalResource.class);
> {quote}
> Because I forgot to set its visibility, the job failed when I 
> submitted it.
> But the NodeManagers shut down one by one at the same time, and there is a 
> NullPointerException in the NodeManager's log:
> {quote}
> 2017-07-18 17:54:09,289 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop   
> IP=10.43.156.177OPERATION=Start Container Request   
> TARGET=ContainerManageImpl  RESULT=SUCCESS  
> APPID=application_1499221670783_0067
> CONTAINERID=container_1499221670783_0067_02_03
> 2017-07-18 17:54:09,292 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceSet.addResources(ResourceSet.java:84)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:868)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$RequestResourcesTransition.transition(ContainerImpl.java:819)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:1684)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:96)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1418)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1411)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:745)
> 2017-07-18 17:54:09,292 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Start request for container_1499221670783_0067_02_02 by user hadoop
> {quote}
> Then I changed my code but still set the visibility to null:
> {quote}
> LocalResource resource = LocalResource.newInstance(
> URL.fromURI(dst.toUri()),
> LocalResourceType.FILE, 
> {color:red}null{color},
> fileStatus.getLen(), 
> fileStatus.getModificationTime());
> {quote}
> This error still happened.
> Finally, I set the visibility to a correct value, and the error did not happen again.
> So I think a null LocalResource visibility will cause the NodeManager to 
> shut down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6791) Add a new acl control to make YARN acl control perfect

2017-07-20 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6791:
--
Attachment: YARN-6791.004.patch

> Add a new acl control to make YARN acl control perfect
> --
>
> Key: YARN-6791
> URL: https://issues.apache.org/jira/browse/YARN-6791
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.7.2
>
> Attachments: screenshot-1.png, screenshot-2.png, YARN-6791.001.patch, 
> YARN-6791.002.patch, YARN-6791.003.patch, YARN-6791.004.patch
>
>
> The YARN application ACL control is imperfect, which can leave us pretty 
> confused sometimes.
> !screenshot-1.png!
> YARN ACLs are disabled by default, but when we enable them, users who are 
> neither YARN admins nor queue admins cannot view application details. 
> The YARN RM web UI will show something like the following:
> !screenshot-2.png! 
> So when we enable those configs, it is very inconvenient to view application 
> status in the RM web UI.
> There are two ways to solve the problem:
> 1. Make the web UI more capable, allowing a user to log in as any user they 
> want. This would also require a proper verification mechanism.
> 2. Add a config in YarnConfiguration which allows certain users to view any 
> application they want, but without modify permissions. 
> In this way, such a user can view all applications but cannot kill other 
> users' applications.
> Of the above two solutions, I will choose the second: it is low cost but 
> more useful.
> I will work on this and upload the patch later.
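
As an illustration of solution 2 from the description, a view-only user list 
could be wired up roughly as follows; the property name and class are 
assumptions for the sketch, not what the attached patch does:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;

public class ViewOnlyAcl {
  // Assumed property name, for illustration only.
  static final String VIEW_ACL_KEY = "yarn.admin.acl.view-only-users";

  private final Set<String> viewOnlyUsers;

  public ViewOnlyAcl(Configuration conf) {
    viewOnlyUsers = new HashSet<>(
        Arrays.asList(conf.getTrimmedStrings(VIEW_ACL_KEY)));
  }

  /** True if the user may view any app even without admin/queue ACLs. */
  public boolean canView(String user) {
    return viewOnlyUsers.contains(user);
  }
}
{code}

The check would be consulted only for view operations, so modify/kill paths 
still require the existing admin or queue ACLs.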



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2017-07-20 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094847#comment-16094847
 ] 

Shane Kumpf commented on YARN-6830:
---

Thanks for the feedback [~templedf]. I see what you are proposing and it's 
close, but unfortunately, that still seems to break down when there are 
embedded commas. I'm all for simplifying this, but after quite a bit of 
testing, I'm still not able to come up with a single regex that works for all 
cases, given the features that Java supports. I'm certainly open to additional 
suggestions. 

With what you've proposed above, the result is:

Input:
{code}
String goodEnv = "a1='1',b_2=2,_c=3,d=4,e=,f_win=%JAVA_HOME%"
+ ",g_nix=$JAVA_HOME,h='6,7',i=\"8,9\",j=\"10 11\"";
{code}

Result:
{code}
Key: a1 Value: '1'
Key: b_2 Value: 2
Key: _c Value: 3
Key: d Value: 4
Key: e Value: 
Key: f_win Value: %JAVA_HOME%
Key: g_nix Value: $JAVA_HOME
Key: h Value: '6
Key: i Value: "8
Key: j Value: "10 11"
{code}
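
For reference, one quote-aware alternative to a single regex is a small 
one-pass splitter. The sketch below is illustrative only (not the attached 
patch), and it strips the quotes from stored values rather than keeping them 
as in the output above:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class EnvParser {
  // Walk the string once, toggling quote state, so commas inside '...' or
  // "..." do not terminate a value.
  public static Map<String, String> parseEnv(String env) {
    Map<String, String> result = new LinkedHashMap<>();
    StringBuilder token = new StringBuilder();
    char quote = 0;  // the open quote char, or 0 when outside quotes
    for (char c : env.toCharArray()) {
      if (quote == 0 && (c == '\'' || c == '"')) {
        quote = c;               // opening quote: start ignoring commas
      } else if (quote != 0 && c == quote) {
        quote = 0;               // matching closing quote
      } else if (quote == 0 && c == ',') {
        addPair(result, token.toString());
        token.setLength(0);
      } else {
        token.append(c);
      }
    }
    addPair(result, token.toString());
    return result;
  }

  private static void addPair(Map<String, String> map, String kv) {
    int eq = kv.indexOf('=');
    if (eq > 0) {
      map.put(kv.substring(0, eq), kv.substring(eq + 1));
    }
  }
}
{code}

With the input above, this yields h={{6,7}}, i={{8,9}}, and j={{10 11}}, at 
the cost of not supporting escaped quote characters inside values.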

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6830.001.patch
>
>
> There are cases where it is necessary to allow for quoted string literals 
> within environment variable values when passed via the YARN command line 
> interface.
> For example, consider the following environment variables for an MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempt to quote the embedded comma-separated value 
> results in quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow for quoting the comma separated value (escaped double 
> or single quote). This was mentioned on YARN-4595 and will impact YARN-5534 
> as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6685) Add job count in to SLS JSON input format

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094846#comment-16094846
 ] 

Hudson commented on YARN-6685:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12037 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12037/])
YARN-6685. Add job count in to SLS JSON input format. (Yufei Gu via (haibochen: 
rev 0ba8cda13549cc4a3946c440016f9d2a9e78740d)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md


> Add job count in to SLS JSON input format
> -
>
> Key: YARN-6685
> URL: https://issues.apache.org/jira/browse/YARN-6685
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6685.001.patch, YARN-6685.002.patch, 
> YARN-6685.003.patch
>
>
> YARN-6522 made writing SLS workloads much simpler by improving the SLS JSON 
> input format. There is one more improvement we can make: adding a job count 
> to simplify the configuration of multiple jobs with the same settings. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6685) Add job count in to SLS JSON input format

2017-07-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094810#comment-16094810
 ] 

Haibo Chen commented on YARN-6685:
--

+1. Committing this shortly.

> Add job count in to SLS JSON input format
> -
>
> Key: YARN-6685
> URL: https://issues.apache.org/jira/browse/YARN-6685
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6685.001.patch, YARN-6685.002.patch, 
> YARN-6685.003.patch
>
>
> YARN-6522 made writing SLS workloads much simpler by improving the SLS JSON 
> input format. There is one more improvement we can make: adding a job count 
> to simplify the configuration of multiple jobs with the same settings. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6788) Improve performance of resource profile branch

2017-07-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6788:
--
Attachment: YARN-6788-YARN-3926.009.patch

Uploading a new patch with a few more improvements; will try Jenkins again.

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch, YARN-6788-YARN-3926.005.patch, 
> YARN-6788-YARN-3926.006.patch, YARN-6788-YARN-3926.007.patch, 
> YARN-6788-YARN-3926.008.patch, YARN-6788-YARN-3926.009.patch
>
>
> Currently we can see a 15% performance delta with this branch. 
> This jira makes a few performance improvements to address it.
> Also this patch will handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6721) container-executor should have stack checking

2017-07-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-6721:
-

Assignee: Sunil G

> container-executor should have stack checking
> -
>
> Key: YARN-6721
> URL: https://issues.apache.org/jira/browse/YARN-6721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, security
>Reporter: Allen Wittenauer
>Assignee: Sunil G
>  Labels: security
>
> As per https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt and 
> given that container-executor is setuid, it should be compiled with stack 
> checking if the compiler supports such features.  (-fstack-check on gcc, 
> -fsanitize=safe-stack on clang, -xcheck=stkovf on "Oracle Solaris Studio", 
> others as we find them, ...)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6721) container-executor should have stack checking

2017-07-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094773#comment-16094773
 ] 

Sunil G commented on YARN-6721:
---

I would like to take this up and give it a go. Please reassign if anyone else 
is interested. I will share a patch soon.

> container-executor should have stack checking
> -
>
> Key: YARN-6721
> URL: https://issues.apache.org/jira/browse/YARN-6721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, security
>Reporter: Allen Wittenauer
>  Labels: security
>
> As per https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt and 
> given that container-executor is setuid, it should be compiled with stack 
> checking if the compiler supports such features.  (-fstack-check on gcc, 
> -fsanitize=safe-stack on clang, -xcheck=stkovf on "Oracle Solaris Studio", 
> others as we find them, ...)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094769#comment-16094769
 ] 

Jason Lowe commented on YARN-6820:
--

Visibility labels on the surface look like they would map very closely to 
domains in ATSv1.  However as Varun points out, one issue is that we don't get 
the user->labels mapping for free because HBase only ever sees ATSv2 read 
requests from one user, the timeline reader.  Therefore the ATSv2 timeline 
reader would need to map the requesting user to the list of labels that they 
are eligible to read and specify those labels in the HBase read request.  I 
might be missing something, but I don't see how HBase is going to be able to do 
that mapping for us directly during the read request since the actual user 
isn't accessing HBase.  I suppose it could if the timeline reader user is 
allowed to proxy as other users to HBase during reads?  But I didn't think the 
plan was to give any credentials at all for regular users to HBase nor make the 
timeline reader a proxy user.

Mapping a user to their labels may be fairly straightforward since HBase allows 
us to query which labels a user has access to.  Unfortunately it only returns 
the labels that explicitly specified the user and not any labels for the user's 
groups.  Therefore we'd still need to find a way to map a user to their groups 
and look up the labels for those groups (and recursively, if groups contain other 
groups).
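
To make that flow concrete, here is a sketch under the assumption that the 
timeline reader resolves labels itself. {{lookupLabels()}} is a hypothetical 
helper around HBase's VisibilityClient#getAuths (which, as noted, only returns 
labels granted directly to the given name), and the group lookup uses Hadoop's 
Groups mapping:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.security.Groups;

abstract class LabelResolvingReader {

  /**
   * Hypothetical helper: would wrap VisibilityClient#getAuths for a user or
   * group name.
   */
  abstract List<String> lookupLabels(String name) throws Exception;

  // Since HBase only ever sees the timeline reader user, the reader must
  // resolve the requesting user's labels itself -- including labels granted
  // to the user's groups -- and attach them to the read request.
  Scan buildScanFor(String requestingUser, Configuration conf)
      throws Exception {
    List<String> labels = new ArrayList<>(lookupLabels(requestingUser));
    for (String group : Groups.getUserToGroupsMappingService(conf)
        .getGroups(requestingUser)) {
      labels.addAll(lookupLabels(group));  // labels granted to the group
    }
    Scan scan = new Scan();
    scan.setAuthorizations(new Authorizations(labels));
    return scan;
  }
}
{code}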


> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6373) [YARN-3368] Improvements in cluster-overview page in YARN-UI

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094752#comment-16094752
 ] 

Hadoop QA commented on YARN-6373:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6373 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866891/YARN-6373.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux d885aea50d36 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c21c260 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16501/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Improvements in cluster-overview page in YARN-UI
> 
>
> Key: YARN-6373
> URL: https://issues.apache.org/jira/browse/YARN-6373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Gergely Novák
> Attachments: YARN-6373.001.patch
>
>
> # Make appld and queueName clickable to navigate to respective pages.
> # Flow layout for panels in cluster-overview page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6373) [YARN-3368] Improvements in cluster-overview page in YARN-UI

2017-07-20 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094737#comment-16094737
 ] 

Akhil PB edited comment on YARN-6373 at 7/20/17 2:10 PM:
-

It seems that there are changes made for donut charts in most of the pages. 
If every chart is working fine, then please go ahead, [~sunilg] and 
[~GergelyNovak]. The patch looks good to me.
Thoughts?


was (Author: akhilpb):
It seems that there are changes made for donut charts in most of the pages. 
If every chart working is fine, then pls go ahead [~sunilg] and 
[~GergelyNovak]. Patch looks good to me.
Thoughts?

> [YARN-3368] Improvements in cluster-overview page in YARN-UI
> 
>
> Key: YARN-6373
> URL: https://issues.apache.org/jira/browse/YARN-6373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Gergely Novák
> Attachments: YARN-6373.001.patch
>
>
> # Make appld and queueName clickable to navigate to respective pages.
> # Flow layout for panels in cluster-overview page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6373) [YARN-3368] Improvements in cluster-overview page in YARN-UI

2017-07-20 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094737#comment-16094737
 ] 

Akhil PB commented on YARN-6373:


It seems that there are changes made for donut charts in most of the pages. 
If every chart is working fine, then please go ahead, [~sunilg] and 
[~GergelyNovak]. The patch looks good to me.
Thoughts?

> [YARN-3368] Improvements in cluster-overview page in YARN-UI
> 
>
> Key: YARN-6373
> URL: https://issues.apache.org/jira/browse/YARN-6373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Gergely Novák
> Attachments: YARN-6373.001.patch
>
>
> # Make appld and queueName clickable to navigate to respective pages.
> # Flow layout for panels in cluster-overview page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6791) Add a new acl control to make YARN acl control perfect

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094736#comment-16094736
 ] 

Hadoop QA commented on YARN-6791:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-6791 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878178/YARN-6791.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16500/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a new acl control to make YARN acl control perfect
> --
>
> Key: YARN-6791
> URL: https://issues.apache.org/jira/browse/YARN-6791
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.7.2
>
> Attachments: screenshot-1.png, screenshot-2.png, YARN-6791.001.patch, 
> YARN-6791.002.patch, YARN-6791.003.patch
>
>
> The YARN application ACL control is imperfect, which can leave us pretty 
> confused sometimes.
> !screenshot-1.png!
> YARN ACLs are disabled by default, but when we enable them, users who are 
> neither YARN admins nor queue admins cannot view application details. 
> The YARN RM web UI will show something like the following:
> !screenshot-2.png! 
> So when we enable those configs, it is very inconvenient to view application 
> status in the RM web UI.
> There are two ways to solve the problem:
> 1. Make the web UI more capable, allowing a user to log in as any user they 
> want. This would also require a proper verification mechanism.
> 2. Add a config in YarnConfiguration which allows certain users to view any 
> application they want, but without modify permissions. 
> In this way, such a user can view all applications but cannot kill other 
> users' applications.
> Of the above two solutions, I will choose the second: it is low cost but 
> more useful.
> I will work on this and upload the patch later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6791) Add a new acl control to make YARN acl control perfect

2017-07-20 Thread YunFan Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094734#comment-16094734
 ] 

YunFan Zhou commented on YARN-6791:
---

[~dan...@cloudera.com] Yes, I want to close it and use my new JIRA instead.
Please help me close it, thank you.

> Add a new acl control to make YARN acl control perfect
> --
>
> Key: YARN-6791
> URL: https://issues.apache.org/jira/browse/YARN-6791
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.7.2
>
> Attachments: screenshot-1.png, screenshot-2.png, YARN-6791.001.patch, 
> YARN-6791.002.patch, YARN-6791.003.patch
>
>
> The YARN application ACL control is imperfect, which can leave us pretty 
> confused sometimes.
> !screenshot-1.png!
> YARN ACLs are disabled by default, but when we enable them, users who are 
> neither YARN admins nor queue admins cannot view application details. 
> The YARN RM web UI will show something like the following:
> !screenshot-2.png! 
> So when we enable those configs, it is very inconvenient to view application 
> status in the RM web UI.
> There are two ways to solve the problem:
> 1. Make the web UI more capable, allowing a user to log in as any user they 
> want. This would also require a proper verification mechanism.
> 2. Add a config in YarnConfiguration which allows certain users to view any 
> application they want, but without modify permissions. 
> In this way, such a user can view all applications but cannot kill other 
> users' applications.
> Of the above two solutions, I will choose the second: it is low cost but 
> more useful.
> I will work on this and upload the patch later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6791) Add a new acl control to make YARN acl control perfect

2017-07-20 Thread YunFan Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YunFan Zhou updated YARN-6791:
--
Attachment: YARN-6791.003.patch

> Add a new acl control to make YARN acl control perfect
> --
>
> Key: YARN-6791
> URL: https://issues.apache.org/jira/browse/YARN-6791
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Fix For: 2.7.2
>
> Attachments: screenshot-1.png, screenshot-2.png, YARN-6791.001.patch, 
> YARN-6791.002.patch, YARN-6791.003.patch
>
>
> The YARN application ACL control is imperfect, which can leave us pretty 
> confused sometimes.
> !screenshot-1.png!
> YARN ACLs are disabled by default, but when we enable them, users who are 
> neither YARN admins nor queue admins cannot view application details. 
> The YARN RM web UI will show something like the following:
> !screenshot-2.png! 
> So when we enable those configs, it is very inconvenient to view application 
> status in the RM web UI.
> There are two ways to solve the problem:
> 1. Make the web UI more capable, allowing a user to log in as any user they 
> want. This would also require a proper verification mechanism.
> 2. Add a config in YarnConfiguration which allows certain users to view any 
> application they want, but without modify permissions. 
> In this way, such a user can view all applications but cannot kill other 
> users' applications.
> Of the above two solutions, I will choose the second: it is low cost but 
> more useful.
> I will work on this and upload the patch later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6734) Ensure sub-application user is extracted & sent to timeline service

2017-07-20 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6734:

Attachment: YARN-6734-YARN-5355.001.patch

Attaching a v0 patch for writing data into the sub-application table. This 
patch can be applied on top of YARN-6733. Mainly look into these three 
classes: HBaseTimelineWriterImpl, TimelineCollector, TimelineWriter. 

> Ensure sub-application user is extracted & sent to timeline service
> ---
>
> Key: YARN-6734
> URL: https://issues.apache.org/jira/browse/YARN-6734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6734-YARN-5355.001.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information (YARN-6733).
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs with a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> YARN-6733 tracks the code changes needed for table schema creation.
> This jira tracks writing to that table, updating the user name fields to 
> include the sub-application user, etc. This would mean adding a field to 
> Flow Context which can store an additional user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094700#comment-16094700
 ] 

Varun Saxena commented on YARN-6850:


Tests not required.

> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2,  ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and is 
> called for every put.
> We need to ensure that the cell timestamps for metrics in entity and 
> application (and sub application) tables are "correct" timestamps since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table by the 
> coprocessor which intercepts all reads & writes to cells in this table. It 
> looks at the supplemented timestamp to figure out which app id this 
> particular cell belongs to. This is done in order to ensure no collision 
> occurs when two apps belonging to same flow run write the same metric at the 
> same timestamp. 
> Discovered in the discussion in YARN-4455 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094682#comment-16094682
 ] 

Rohith Sharma K S commented on YARN-6733:
-

Adding to Varun's comments: SubApplicationTable does not have flowVersion, but 
SubApplicationColumn has flow version. I think this can be removed. 

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch
>
>
> After a discussion with Tez folks, we have been considering introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094665#comment-16094665
 ] 

Hadoop QA commented on YARN-6850:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878168/YARN-6850-YARN-5355.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f69e97b6d76d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 743e473 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16499/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16499/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16499/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ensure that supplemented timestamp is stored only for flow run metrics

[jira] [Updated] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6850:
---
Attachment: YARN-6850-YARN-5355.01.patch

Attaching a quick fix.
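
For context, the shape of the conditional involved is roughly as below; the 
multiplier and method signatures are loose simplifications of 
TimestampGenerator, not the actual patch.
{code}
public class PutTimestampSketch {
  // Extra low-order digits appended below the real timestamp.
  private static final long TS_MULTIPLIER = 1_000_000L;

  // Pack an app-id derived suffix into the low-order digits so that two apps
  // of the same flow run writing the same metric at the same time do not
  // collide on a single HBase cell version.
  static long supplement(long timestamp, String appId) {
    long suffix = (appId == null)
        ? 0L : (appId.hashCode() & Integer.MAX_VALUE) % TS_MULTIPLIER;
    return timestamp * TS_MULTIPLIER + suffix;
  }

  // The fix in spirit: only flow run table cells get the supplemented value,
  // so entity/application tables keep real timestamps and TTLs stay correct.
  static long getPutTimestamp(long ts, String appId, boolean isFlowRunTable) {
    return isFlowRunTable ? supplement(ts, appId) : ts;
  }
}
{code}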

> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6850-YARN-5355.01.patch
>
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and is 
> called for every put.
> We need to ensure that the cell timestamps for metrics in the entity and 
> application (and sub-application) tables are "correct" timestamps, since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table, by the 
> coprocessor which intercepts all reads & writes to cells in that table. It 
> looks at the supplemented timestamp to figure out which app id a particular 
> cell belongs to. This is done in order to ensure that no collision occurs 
> when two apps belonging to the same flow run write the same metric at the 
> same timestamp. 
> Discovered in the discussion in YARN-4455 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6850) Ensure that supplemented timestamp is stored only for flow run metrics

2017-07-20 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-6850:
--

Assignee: Varun Saxena

> Ensure that supplemented timestamp is stored only for flow run metrics
> --
>
> Key: YARN-6850
> URL: https://issues.apache.org/jira/browse/YARN-6850
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
>
> In timeline service v2, ColumnHelper#getPutTimestamp supplements the 
> timestamp and is called by ColumnHelper#store. This is not conditional and is 
> called for every put.
> We need to ensure that the cell timestamps for metrics in the entity and 
> application (and sub-application) tables are "correct" timestamps, since we 
> will be enabling TTLs for these cells. 
> The supplemented timestamp is to be used only in the flow run table, by the 
> coprocessor which intercepts all reads & writes to cells in that table. It 
> looks at the supplemented timestamp to figure out which app id a particular 
> cell belongs to. This is done in order to ensure that no collision occurs 
> when two apps belonging to the same flow run write the same metric at the 
> same timestamp. 
> Discovered in the discussion in YARN-4455 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094484#comment-16094484
 ] 

Varun Saxena commented on YARN-6733:


Thanks [~vrushalic] for the patch and [~rohithsharma] for the reviews.
Patch LGTM. I noticed a few nits while reading the code though.

# The rowkey mentioned in the class javadoc of SubApplicationTable is 
incorrect. In the same javadoc, configKey2 is listed under the metrics column 
family.
# SubApplicationRowKey L181 describes the rowkey incorrectly: clusterid should 
follow subappuserid.
# In SubApplicationRowKey L204 the variable is named enitityIdPrefix. It 
should be entityIdPrefix.
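
To make nit 2 concrete, subAppUserId leads and clusterId follows; a simplified 
sketch of the intended order (String.join stands in for the real byte-level 
separator encoding, and the trailing fields are illustrative):
{code}
public class SubApplicationRowKeySketch {
  static String of(String subAppUserId, String clusterId, String entityType,
      long entityIdPrefix, String entityId, String userId) {
    // subAppUserId first, clusterId second, as per nit 2 above.
    return String.join("!", subAppUserId, clusterId, entityType,
        Long.toString(entityIdPrefix), entityId, userId);
  }
}
{code}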

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch
>
>
> After a discussion with Tez folks, we have been considering introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6102) On failover RM can crash due to unregistered event to AsyncDispatcher

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094460#comment-16094460
 ] 

Hadoop QA commented on YARN-6102:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 138 unchanged - 8 fixed = 138 total (was 146) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878121/YARN-6102.07.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dda060a31f97 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c21c260 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16498/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16498/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16498/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> On failover RM can crash due to unregistered event to AsyncDispatcher
> --

[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094452#comment-16094452
 ] 

Varun Saxena commented on YARN-6130:


Thanks [~rohithsharma] for the review.

bq. The newly added proto, i.e. AMCollectorInfoProto, has rm_identifier and 
version. Why are these required for the AM?
The intention here is to avoid updating the token in the AM UGI on every 
allocate response. We can potentially cache the RM ID and version to ensure 
that the version of the token coming from the RM is the same as the one 
already applied by the AM in its UGI. Thoughts?

bq. I see AppCollectorData#hasSameVersion added. Where will it be used? Trying 
to understand its usage.
Not required. I was planning to use it for the use case above. A similar check 
would be required in AMCollectorInfo instead.

bq. IAC, collector_address and token can be kept in a separate proto; maybe 
name it something generic, i.e. CollectorInfo or any other?
OK. We can name it CollectorInfo instead of AMCollectorInfo.

bq. For every heartbeat, we are creating an AMCollectorInfo object. Could it 
be cached?
Yes, it can be cached in RMAppImpl.
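
Roughly, the caching idea looks like the sketch below; all names here are 
illustrative, not the actual proto or classes.
{code}
// The AM remembers the RM id and collector-info version it last applied, and
// touches the token in its UGI only when a newer version arrives in an
// allocate response.
public class CollectorTokenCache {
  private String lastRmId;
  private long lastVersion = -1;

  // Returns true when the token from this allocate response must be
  // (re)applied to the AM's UGI.
  synchronized boolean needsUpdate(String rmId, long version) {
    if (rmId.equals(lastRmId) && version == lastVersion) {
      return false;  // same token version already applied; skip the update
    }
    lastRmId = rmId;
    lastVersion = version;
    return true;
  }
}
{code}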



> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6842) Implement a new access type for queue

2017-07-20 Thread YunFan Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094404#comment-16094404
 ] 

YunFan Zhou commented on YARN-6842:
---

[~templedf] Yes, you are right.

> Implement a new access type for queue
> -
>
> Key: YARN-6842
> URL: https://issues.apache.org/jira/browse/YARN-6842
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
>
> At present, when we want to access the applications of a queue, the only 
> thing we can do is become an administrator of the queue.
> But sometimes we only want to authorize someone to view the applications of 
> a queue without granting modify operations.
> Our current mechanism has no way to express this, so I will implement a new 
> access type for queues to solve this problem.
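> For illustration, the shape of the change could be along these lines; the 
> new constant name is hypothetical, not the final one.
> {code}
> // Today QueueACL offers SUBMIT_APPLICATIONS and ADMINISTER_QUEUE; the
> // proposal adds a read-only access type. VIEW_APPLICATIONS is a
> // hypothetical name.
> public enum QueueACL {
>   SUBMIT_APPLICATIONS,  // existing: may submit applications to the queue
>   ADMINISTER_QUEUE,     // existing: full admin rights, e.g. kill any app
>   VIEW_APPLICATIONS     // proposed: may only view applications in the queue
> }
> {code}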



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094391#comment-16094391
 ] 

Varun Saxena commented on YARN-4455:


Thanks [~rohithsharma] for the review and commit.
Thanks [~vrushalic] for additional reviews.


> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6102) On failover RM can crash due to unregistered event to AsyncDispatcher

2017-07-20 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6102:

Attachment: YARN-6102.07.patch

Attached the patch fixing javadoc errors. 

> On failover RM can crash due to unregistered event to AsyncDispatcher
> -
>
> Key: YARN-6102
> URL: https://issues.apache.org/jira/browse/YARN-6102
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ajith S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: eventOrder.JPG, YARN-6102.01.patch, YARN-6102.02.patch, 
> YARN-6102.03.patch, YARN-6102.04.patch, YARN-6102.05.patch, 
> YARN-6102.06.patch, YARN-6102.07.patch
>
>
> {code}2017-01-17 16:42:17,911 FATAL [AsyncDispatcher event handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:dispatch(200)) - Error in 
> dispatcher thread
> java.lang.Exception: No handler for registered for class 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:196)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> 2017-01-17 16:42:17,914 INFO  [AsyncDispatcher ShutDown handler] 
> event.AsyncDispatcher (AsyncDispatcher.java:run(303)) - Exiting, bbye..{code}
> The same stack trace was also noticed when {{TestResourceTrackerOnHA}} exits 
> abnormally; after some analysis, I was able to reproduce it.
> Once the node heartbeat is sent to the RM, inside 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.nodeHeartbeat(NodeHeartbeatRequest)}},
>  before sending it to the dispatcher through
> {{this.rmContext.getDispatcher().getEventHandler().handle(nodeStatusEvent);}} 
> if RM failover is called, the dispatcher is reset.
> The new dispatcher is, however, first started and only then are the events 
> registered, at 
> {{org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.reinitialize(boolean)}}.
> So the event order will look like:
> 1. Send a node heartbeat to {{ResourceTrackerService}}.
> 2. In {{ResourceTrackerService.nodeHeartbeat}}, before the event is passed 
> to the dispatcher, RM failover is called.
> 3. In RM failover, the current active resets the dispatcher in reinitialize, 
> i.e. ( {{resetDispatcher();}} + {{createAndInitActiveServices();}} ).
> Now, between {{resetDispatcher();}} and {{createAndInitActiveServices();}}, 
> {{ResourceTrackerService.nodeHeartbeat}} invokes the dispatcher.
> This causes the above error: at the point in time when the {{STATUS_UPDATE}} 
> event is handed to the dispatcher in {{ResourceTrackerService}}, the new 
> dispatcher (from the failover) may be started but not yet registered for 
> events.
> Using the same steps (pausing the JVM in a debugger), I was able to 
> reproduce this in a production cluster as well: for a {{STATUS_UPDATE}} 
> active-service event, the service is yet to forward the event to the RM 
> dispatcher, but a failover is called and the dispatcher reset falls between 
> {{resetDispatcher();}} and {{createAndInitActiveServices();}}
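> In miniature, the window looks like this (a self-contained illustration in 
> the spirit of AsyncDispatcher, not actual RM code):
> {code}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> 
> public class DispatcherRaceSketch {
>   private final Map<String, Runnable> handlers = new ConcurrentHashMap<>();
> 
>   void dispatch(String eventType) {
>     Runnable h = handlers.get(eventType);
>     if (h == null) {
>       // corresponds to the FATAL "No handler ... registered" in the log
>       throw new IllegalStateException("No handler registered for " + eventType);
>     }
>     h.run();
>   }
> 
>   public static void main(String[] args) {
>     DispatcherRaceSketch d = new DispatcherRaceSketch();
>     // resetDispatcher(): the new dispatcher is live, nothing registered yet.
>     try {
>       d.dispatch("STATUS_UPDATE");  // heartbeat lands in the window
>     } catch (IllegalStateException e) {
>       System.out.println(e.getMessage());
>     }
>     // createAndInitActiveServices() registers handlers only afterwards.
>     d.handlers.put("STATUS_UPDATE", () -> System.out.println("handled"));
>     d.dispatch("STATUS_UPDATE");  // now it is handled
>   }
> }
> {code}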



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094371#comment-16094371
 ] 

Rohith Sharma K S commented on YARN-4455:
-

Committed to YARN-5355-branch-2 as well. 

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4455) Support fetching metrics by time range

2017-07-20 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-4455:

Fix Version/s: YARN-5355-branch-2

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6844) AMRMClientImpl.checkNodeLabelExpression() has wrong error message

2017-07-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094367#comment-16094367
 ] 

Daniel Templeton commented on YARN-6844:


LGTM +1.  I'll commit it later.

> AMRMClientImpl.checkNodeLabelExpression() has wrong error message
> -
>
> Key: YARN-6844
> URL: https://issues.apache.org/jira/browse/YARN-6844
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1, 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6844.001.patch
>
>
> It says, "Cannot specify more than two node labels in a single node label 
> expression," but it should say that you can't have more than *one*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094365#comment-16094365
 ] 

Rohith Sharma K S commented on YARN-6733:
-

As per an offline discussion with Vrushali, YARN-6850 has been raised to 
handle the timestamp issue. 

[~varun_saxena] would you like to have a look at the final patch? If there are 
no more objections, I will commit it tomorrow. 

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch, YARN-6733-YARN-5355.005.patch
>
>
> After a discussion with Tez folks, we have been considering introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094348#comment-16094348
 ] 

Varun Saxena commented on YARN-4455:


bq. Is it required to commit to YARN-5355-branch-2? Is this fix compatible 
with the HBase version of YARN-5355-branch-2?
Yes, we bumped the HBase version on YARN-5355-branch-2 too.

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6685) Add job count in to SLS JSON input format

2017-07-20 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094300#comment-16094300
 ] 

Yufei Gu commented on YARN-6685:


The test failure is unrelated.

> Add job count in to SLS JSON input format
> -
>
> Key: YARN-6685
> URL: https://issues.apache.org/jira/browse/YARN-6685
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6685.001.patch, YARN-6685.002.patch, 
> YARN-6685.003.patch
>
>
> YARN-6522 made writing SLS workloads much simpler by improving the SLS JSON 
> input format. There is one more improvement we can make: adding a job count 
> to simplify the configuration of multiple jobs that share the same 
> configuration. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6102) On failover RM can crash due to unregistered event to AsyncDispatcher

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094296#comment-16094296
 ] 

Hadoop QA commented on YARN-6102:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 137 unchanged - 8 fixed = 137 total (was 145) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878100/YARN-6102.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7c3681434861 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c21c260 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16496/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16496/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16496/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16496/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (YARN-6685) Add job count in to SLS JSON input format

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094285#comment-16094285
 ] 

Hadoop QA commented on YARN-6685:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 33s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.sls.TestReservationSystemInvariants |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6685 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878102/YARN-6685.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b511d5d3192b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c21c260 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16497/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16497/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16497/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add job count in to SLS JSON input format
> -
>
> Key: YARN-6685
> URL: https://issues.apache.org/jira/browse/YARN-6685
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>   

[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094275#comment-16094275
 ] 

Varun Saxena commented on YARN-6820:


As all reads from the Timeline Reader would be done as the ats user, would 
specifying authorizations/labels in the scan be sufficient to filter out the 
results? If yes, we can utilize this for our eventual authorization solution, 
with user/groups/domain written as labels during put and specified as 
authorizations during get/scan.

By the way, for whitelisting/blacklisting in our next drop, would a simple 
configuration with a check in the reader not suffice?
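
To sketch the label idea against HBase's cell-visibility API (table, family, 
and label names are placeholders, and labels would have to be provisioned via 
VisibilityClient first):
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

public class VisibilityLabelSketch {

  // Writer side: tag each cell with a visibility expression naming who may
  // read it (e.g. the owning user or an admin label).
  static void writeWithLabel(Connection conn, byte[] row) throws IOException {
    try (Table t = conn.getTable(TableName.valueOf("timelineservice.entity"))) {
      Put put = new Put(row);
      put.addColumn(Bytes.toBytes("i"), Bytes.toBytes("created_time"),
          Bytes.toBytes(System.currentTimeMillis()));
      put.setCellVisibility(new CellVisibility("userX|admin"));
      t.put(put);
    }
  }

  // Reader side: the scan runs as the ats user but carries the end user's
  // labels, so HBase filters out cells that user is not authorized to see.
  static Scan scanAsEndUser(String... endUserLabels) {
    Scan scan = new Scan();
    scan.setAuthorizations(new Authorizations(endUserLabels));
    return scan;
  }
}
{code}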

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data 
> and no other user can read any data. The restriction can be turned off so 
> that all users can read all data.
> The whitelist could be stored in a "domain" table in HBase perhaps, or a 
> configuration setting for the cluster, or something else that's simple 
> enough. ATSv1 has a concept of domain for isolating users for reading; it 
> would be good to keep that in consideration. 
> In ATSv1, a domain offers a namespace for the Timeline server, allowing 
> users to host multiple entities, isolating them from other users and 
> applications. A "Domain" in ATSv1 primarily stores owner info, read and 
> write ACL information, and created and modified timestamp information. Each 
> Domain is identified by an ID which must be unique across all users in the 
> YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org