[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996206#comment-15996206
 ] 

Hadoop QA commented on YARN-6435:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866331/YARN-6435.YARN-5355.0001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ada5aca4ee51 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 1f98134 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15820/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15820/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15820/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-6522) Make SLS JSON input file format simple and scalable

2017-05-03 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996191#comment-15996191
 ] 

Robert Kanter commented on YARN-6522:
-

Some more comments:
- {{stjp.getNumNodes()/stjp.getNodesPerRack()}} could result in a divide by 
zero because no validation is done on {{nodes_per_rack}} (see the sketch 
below). This falls under one of my previous comments about validating user 
input. Your previous comment said you filed YARN-6511 to address that, but I 
think you accidentally wrote down the wrong JIRA number.
- Should {{count = Math.max(count, 1);}} be {{Math.min}}?
- In the docs for {{num.racks}}, we should add a note that it divides 
{{num.nodes}} across the racks evenly (well, mostly evenly). Otherwise, 
someone might assume it puts {{num.racks}} nodes into each rack.
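
For reference, a minimal sketch of the kind of validation meant above. Only the 
{{num.nodes}}/{{nodes_per_rack}} names come from the comment; the helper class, 
exception choice, and rounding are illustrative assumptions, not the patch's 
actual code:

{code}
// Hedged sketch only: guard the num.nodes / nodes_per_rack division flagged
// above so a zero or negative value fails fast instead of dividing by zero.
public final class SlsInputValidation {

  private SlsInputValidation() {
  }

  /** Returns the number of racks after validating the user-supplied inputs. */
  static int computeNumRacks(int numNodes, int nodesPerRack) {
    if (numNodes < 1) {
      throw new IllegalArgumentException(
          "num.nodes must be >= 1, was " + numNodes);
    }
    if (nodesPerRack < 1) {
      throw new IllegalArgumentException(
          "nodes_per_rack must be >= 1, was " + nodesPerRack);
    }
    // Round up so every node is assigned to some rack (a mostly even split).
    return (numNodes + nodesPerRack - 1) / nodesPerRack;
  }
}
{code}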

> Make SLS JSON input file format simple and scalable
> ---
>
> Key: YARN-6522
> URL: https://issues.apache.org/jira/browse/YARN-6522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6522.001.patch, YARN-6522.002.patch, 
> YARN-6522.003.patch
>
>
> The SLS input format is verbose and doesn't scale out. We can improve it in 
> these ways:
> # Tasks must currently be configured one by one when a job has more than one 
> task, so job configurations usually contain many redundant items. Letting the 
> task configuration specify a task count would solve this.
> # Container host is useful for locality testing, but it is tedious to specify 
> a container host for every task in tests unrelated to locality. We would like 
> to make it optional.
> # For most tests, we don't care about job.id. Make it optional and generate it 
> automatically by default.
> # job.finish.ms doesn't make sense; just remove it.
> # Container type and container priority should be optional as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-05-03 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996188#comment-15996188
 ] 

Sidharta Seethana commented on YARN-6374:
-

[~shaneku...@gmail.com], could you please fix the (new) compiler warnings? 
Thanks. 

> Improve test coverage and add utility classes for common Docker operations
> --
>
> Key: YARN-6374
> URL: https://issues.apache.org/jira/browse/YARN-6374
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6374.001.patch, YARN-6374.002.patch, 
> YARN-6374.003.patch, YARN-6374-branch-2.001.patch
>
>
> Currently, it is tedious to execute Docker-related operations due to the 
> plumbing needed to define the DockerCommand, write the command file, 
> configure the privileged operation, and finally execute the command and 
> validate the result. Obtaining the current status of a Docker container can 
> also be improved. Finally, the test coverage is lacking for Docker Commands. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series

2017-05-03 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6435:

Attachment: YARN-6435.YARN-5355.0001.patch

Updated the patch by adding a configuration to set max-versions for metrics. 
Considering the worst case of metrics storage per day, I also changed the 
default max-versions from 1K to 10K. Assuming every metric is published every 
10 seconds, that is 6 (samples per minute) * 60 minutes * 24 hours = 8640 
versions per day, which I rounded up to 10K.
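
As a rough illustration of the idea (not the attached patch), the sketch below 
shows how a configurable max-versions cap could be applied when defining the 
metrics column family. The property name, default constant, and family name are 
assumptions made up for this example; {{HColumnDescriptor#setMaxVersions}} is 
the standard HBase 1.x client call used here.

{code}
// Hedged sketch: make the metrics column family's max-versions configurable
// instead of hardcoding it. All names below are illustrative assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;

public class MetricsColumnFamilyExample {

  // 1 sample / 10 s => 6 per minute * 60 minutes * 24 hours = 8640 per day,
  // rounded up to 10000.
  private static final int DEFAULT_METRICS_MAX_VERSIONS = 10000;

  static HColumnDescriptor metricsFamily(Configuration conf) {
    HColumnDescriptor metrics = new HColumnDescriptor("m");
    metrics.setMaxVersions(conf.getInt(
        "yarn.timeline-service.hbase-schema.metrics-max-versions",
        DEFAULT_METRICS_MAX_VERSIONS));
    return metrics;
  }
}
{code}

HBase then returns at most {{MIN(column family max versions, requested 
versions)}}, so raising the column family setting is what allows a 
*metricslimit* above 1000 to take effect.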

> [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
> 
>
> Key: YARN-6435
> URL: https://issues.apache.org/jira/browse/YARN-6435
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch
>
>
> It is observed that even though *metricslimit* is set to 1500, the maximum 
> number of metric values retrieved is 1000. 
> This is because, while creating the EntityTable, the metrics column family 
> max versions is set to 1000, hardcoded in 
> {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. HBase therefore returns at most 
> {{MIN(cf max versions, user provided max versions)}} versions. 
> This behavior contradicts the documentation, which claims that 
> {code}
> metricslimit - If specified, defines the number of metrics to return. 
> Considered only if fields contains METRICS/ALL or metricstoretrieve is 
> specified. Ignored otherwise. The maximum possible value for metricslimit can 
> be maximum value of Integer. If it is not specified or has a value less than 
> 1, and metrics have to be retrieved, then metricslimit will be considered as 
> 1 i.e. latest single value of metric(s) will be returned.
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4166) Support changing container cpu resource

2017-05-03 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-4166:

Attachment: YARN-4166.004.patch

> Support changing container cpu resource
> ---
>
> Key: YARN-4166
> URL: https://issues.apache.org/jira/browse/YARN-4166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Jian He
>Assignee: Yang Wang
> Attachments: YARN-4166.001.patch, YARN-4166.002.patch, 
> YARN-4166.003.patch, YARN-4166.004.patch, YARN-4166-branch2.8-001.patch
>
>
> Memory resizing is now supported, we need to support the same for cpu.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6555) Enable flow context read (& corresponding write) for recovering application with NM restart

2017-05-03 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996174#comment-15996174
 ] 

Rohith Sharma K S commented on YARN-6555:
-

Is this on YARN-5355 or YARN-5355-branch-2? IIRC, I had seen this error on 
YARN-5355-branch-2 but NOT on YARN-5355. 

> Enable flow context read (& corresponding write) for recovering application 
> with NM restart 
> 
>
> Key: YARN-6555
> URL: https://issues.apache.org/jira/browse/YARN-6555
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> If timeline service v2 is enabled and NM is restarted with recovery enabled, 
> then NM fails to start and throws an error as  "flow context can't be null".
> This is happening because the flow context did not exist before but now that 
> timeline service v2 is enabled, ApplicationImpl expects it to exist. 
> This would also happen even if flow context existed before but since we are 
> not persisting it / reading it during 
> ContainerManagerImpl#recoverApplication, it does not get passed in to 
> ApplicationImpl.
> full stack trace
> {code}
> 2017-05-03 21:51:52,178 FATAL 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
> NodeManager
> java.lang.IllegalArgumentException: flow context cannot be null
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:104)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:90)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4166) Support changing container cpu resource

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996160#comment-15996160
 ] 

Hadoop QA commented on YARN-4166:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 193 unchanged - 0 fixed = 195 total (was 193) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
50s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-4166 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866309/YARN-4166.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 43579007aa5e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 81092b1 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15819/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15819/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15819/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 

[jira] [Updated] (YARN-4166) Support changing container cpu resource

2017-05-03 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-4166:

Attachment: YARN-4166.003.patch

> Support changing container cpu resource
> ---
>
> Key: YARN-4166
> URL: https://issues.apache.org/jira/browse/YARN-4166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Jian He
>Assignee: Yang Wang
> Attachments: YARN-4166.001.patch, YARN-4166.002.patch, 
> YARN-4166.003.patch, YARN-4166-branch2.8-001.patch
>
>
> Memory resizing is now supported, we need to support the same for cpu.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6556) Implement a FileSystem that reads from HTTP

2017-05-03 Thread Haohui Mai (JIRA)
Haohui Mai created YARN-6556:


 Summary: Implement a FileSystem that reads from HTTP
 Key: YARN-6556
 URL: https://issues.apache.org/jira/browse/YARN-6556
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


We have a use case where YARN applications would like to localize resources 
from Artifactory. Putting the resources on HDFS itself might not be ideal as we 
would like to leverage Artifactory to manage different versions of the 
resources.

It would be nice to have something like {{HttpFileSystem}} that implements the 
Hadoop filesystem API and reads from an HTTP endpoint.

Note that Samza has implemented the proposal by themselves:

https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala

The downside of this approach is that it requires the YARN cluster to put the 
Samza jar on the classpath of each NM.

It would be much nicer for Hadoop to have this feature built-in.
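
As a rough, hedged illustration of the proposal (this is neither the Samza code 
nor a finished design), a minimal read-only Hadoop {{FileSystem}} backed by 
plain HTTP GET could look like the sketch below. Error handling, content length 
via HEAD, authentication, and seek support are deliberately omitted; every name 
and behavior shown is an assumption made for this example.

{code}
// Hedged sketch: a read-only FileSystem that streams data over HTTP GET.
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FSInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

public class HttpFileSystem extends FileSystem {

  private URI uri;

  @Override
  public void initialize(URI name, Configuration conf) throws IOException {
    super.initialize(name, conf);
    setConf(conf);
    this.uri = name;
  }

  @Override
  public String getScheme() {
    return "http";
  }

  @Override
  public URI getUri() {
    return uri;
  }

  @Override
  public FSDataInputStream open(Path f, int bufferSize) throws IOException {
    final InputStream in = new URL(f.toUri().toString()).openStream();
    // FSInputStream provides the Seekable/PositionedReadable contract that
    // FSDataInputStream requires; only forward streaming reads are supported.
    return new FSDataInputStream(new FSInputStream() {
      private long pos = 0;

      @Override public int read() throws IOException {
        int b = in.read();
        if (b >= 0) {
          pos++;
        }
        return b;
      }
      @Override public void close() throws IOException { in.close(); }
      @Override public void seek(long newPos) throws IOException {
        throw new IOException("Seek is not supported over HTTP");
      }
      @Override public long getPos() { return pos; }
      @Override public boolean seekToNewSource(long targetPos) { return false; }
    });
  }

  @Override
  public FileStatus getFileStatus(Path f) {
    // Length is unknown without an extra HEAD request; report 0 as a placeholder.
    return new FileStatus(0, false, 1, 0, 0, f);
  }

  // Write and directory operations are not meaningful for a read-only HTTP source.
  @Override public FSDataOutputStream create(Path f, FsPermission permission,
      boolean overwrite, int bufferSize, short replication, long blockSize,
      Progressable progress) throws IOException {
    throw new IOException("Read-only FileSystem");
  }
  @Override public FSDataOutputStream append(Path f, int bufferSize,
      Progressable progress) throws IOException {
    throw new IOException("Read-only FileSystem");
  }
  @Override public boolean rename(Path src, Path dst) { return false; }
  @Override public boolean delete(Path f, boolean recursive) { return false; }
  @Override public FileStatus[] listStatus(Path f) {
    return new FileStatus[] { getFileStatus(f) };
  }
  @Override public void setWorkingDirectory(Path newDir) { }
  @Override public Path getWorkingDirectory() { return new Path("/"); }
  @Override public boolean mkdirs(Path f, FsPermission permission) { return false; }
}
{code}

To experiment with something like this, one would typically map a scheme to the 
class via the usual {{fs.<scheme>.impl}} key in core-site.xml so that 
localization URIs using that scheme resolve to it.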



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995992#comment-15995992
 ] 

Hadoop QA commented on YARN-5411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
23s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 5s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-2915 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-2915 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 43s{color} 
| {color:red} root generated 1 new + 777 unchanged - 1 fixed = 778 total (was 
778) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} root: The patch generated 0 new + 207 unchanged - 4 
fixed = 207 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
19s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 

[jira] [Commented] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995989#comment-15995989
 ] 

Hadoop QA commented on YARN-6374:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
54s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.8.0_131
 with JDK v1.8.0_131 generated 1 new + 20 unchanged - 1 fixed = 21 total (was 
21) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_121
 with JDK v1.7.0_121 generated 2 new + 22 unchanged - 1 fixed = 24 total (was 
23) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  5s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:8515d35 |
| JIRA Issue | YARN-6374 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866297/YARN-6374-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (YARN-6555) Enable flow context read (& corresponding write) for recovering application with NM restart

2017-05-03 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995963#comment-15995963
 ] 

Vrushali C commented on YARN-6555:
--

In the case where there isn't a prior flow context, we also want to think about 
a default. 

Also, say the NM is constantly crashing (for some unrelated reason) and trying 
to come back up. Each time recovery instantiates a default flow context, it 
should be such that the recovered ApplicationImpl info always maps to the same 
record for that application id and does not end up creating new rows in HBase 
on every NM restart (see the sketch below). 
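
To make that concrete, here is a small hedged sketch of deriving a stable 
default flow context purely from the application id, so repeated recoveries 
always map to the same row. Every name in it is hypothetical and does not 
reflect the actual NM classes or the eventual patch.

{code}
// Hypothetical sketch: a deterministic default flow context derived only from
// the ApplicationId, so every restart/recovery of the same application maps to
// the same flow name and run id (and therefore the same HBase rows).
import org.apache.hadoop.yarn.api.records.ApplicationId;

final class DefaultFlowContextExample {

  final String flowName;
  final String flowVersion;
  final long flowRunId;

  private DefaultFlowContextExample(String flowName, String flowVersion,
      long flowRunId) {
    this.flowName = flowName;
    this.flowVersion = flowVersion;
    this.flowRunId = flowRunId;
  }

  /** Same appId in => same flow context out, no matter how often we recover. */
  static DefaultFlowContextExample forApplication(ApplicationId appId) {
    return new DefaultFlowContextExample(
        appId.toString(),              // e.g. application_1493848312178_0001
        "1",                           // placeholder flow version
        appId.getClusterTimestamp());  // stable run id for the default flow
  }
}
{code}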

> Enable flow context read (& corresponding write) for recovering application 
> with NM restart 
> 
>
> Key: YARN-6555
> URL: https://issues.apache.org/jira/browse/YARN-6555
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> If timeline service v2 is enabled and NM is restarted with recovery enabled, 
> then NM fails to start and throws an error as  "flow context can't be null".
> This is happening because the flow context did not exist before but now that 
> timeline service v2 is enabled, ApplicationImpl expects it to exist. 
> This would also happen even if flow context existed before but since we are 
> not persisting it / reading it during 
> ContainerManagerImpl#recoverApplication, it does not get passed in to 
> ApplicationImpl.
> full stack trace
> {code}
> 2017-05-03 21:51:52,178 FATAL 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
> NodeManager
> java.lang.IllegalArgumentException: flow context cannot be null
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:104)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:90)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6522) Make SLS JSON input file format simple and scalable

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995950#comment-15995950
 ] 

Hadoop QA commented on YARN-6522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-tools/hadoop-sls: The patch generated 0 new + 
45 unchanged - 1 fixed = 45 total (was 46) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 41s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.sls.TestSLSRunner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6522 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866294/YARN-6522.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 75585ad448b7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 81092b1 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15817/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15817/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15817/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15817/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make SLS JSON input file format simple and scalable
> ---
>
> Key: YARN-6522
> URL: 

[jira] [Updated] (YARN-6555) Enable flow context read (& corresponding write) for recovering application with NM restart

2017-05-03 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6555:
-
Description: 
If timeline service v2 is enabled and NM is restarted with recovery enabled, 
then NM fails to start and throws an error as  "flow context can't be null".

This is happening because the flow context did not exist before but now that 
timeline service v2 is enabled, ApplicationImpl expects it to exist. 

This would also happen even if flow context existed before but since we are not 
persisting it / reading it during ContainerManagerImpl#recoverApplication, it 
does not get passed in to ApplicationImpl.


full stack trace
{code}
2017-05-03 21:51:52,178 FATAL 
org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
NodeManager
java.lang.IllegalArgumentException: flow context cannot be null
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:104)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:90)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649)
{code}




  was:

If timeline service v2 is enabled and NM is restarted with recovery enabled, 
then NM fails to start and throws an error as  "flow context can't be null".

This is happening because the flow context did not exist before but now that 
timeline service v2 is enabled, ApplicationImpl expects it to exist. 

This would also happen even if flow context existed before but since we are not 
persisting it / reading it during ContainerManagerImpl#recoverApplication, it 
does not get passed in to ApplicationImpl.


{code}

full stack trace
{code}
2017-05-03 21:51:52,178 FATAL 
org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
NodeManager
java.lang.IllegalArgumentException: flow context cannot be null
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:104)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:90)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649)
{code}

{code}


> Enable flow context read (& corresponding write) for recovering application 
> with NM restart 
> 
>
> Key: YARN-6555
> URL: https://issues.apache.org/jira/browse/YARN-6555
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> If timeline service v2 is enabled and NM is restarted with recovery enabled, 
> then NM fails to start and throws an error as  "flow context can't be null".
> This is happening because the flow context did not exist before but now that 
> timeline service v2 is enabled, ApplicationImpl expects it to exist. 
> This would also happen even if flow context existed before but since we are 
> not 

[jira] [Created] (YARN-6555) Enable flow context read (& corresponding write) for recovering application with NM restart

2017-05-03 Thread Vrushali C (JIRA)
Vrushali C created YARN-6555:


 Summary: Enable flow context read (& corresponding write) for 
recovering application with NM restart 
 Key: YARN-6555
 URL: https://issues.apache.org/jira/browse/YARN-6555
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C
Assignee: Vrushali C



If timeline service v2 is enabled and NM is restarted with recovery enabled, 
then NM fails to start and throws an error as  "flow context can't be null".

This is happening because the flow context did not exist before but now that 
timeline service v2 is enabled, ApplicationImpl expects it to exist. 

This would also happen even if flow context existed before but since we are not 
persisting it / reading it during ContainerManagerImpl#recoverApplication, it 
does not get passed in to ApplicationImpl.


{code}

full stack trace
{code}
2017-05-03 21:51:52,178 FATAL 
org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
NodeManager
java.lang.IllegalArgumentException: flow context cannot be null
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:104)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:90)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649)
{code}

{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-05-03 Thread Sanjay M Pujare (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995922#comment-15995922
 ] 

Sanjay M Pujare commented on YARN-6457:
---

Hi [~haibochen], I have verified that the latest fix suggested by 
[~pra...@datatorrent.com] works. If you are okay with it, I will update my PR 
so it can be merged. Please let me know.

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6111) Rumen input does't work in SLS

2017-05-03 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6111:
---
Issue Type: Sub-task  (was: Bug)
Parent: YARN-5065

> Rumen input does't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-05-03 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995918#comment-15995918
 ] 

Shane Kumpf commented on YARN-6374:
---

I've uploaded a new patch that fixes the test issue with branch-2. Thanks 
[~sidharta-s]!

> Improve test coverage and add utility classes for common Docker operations
> --
>
> Key: YARN-6374
> URL: https://issues.apache.org/jira/browse/YARN-6374
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6374.001.patch, YARN-6374.002.patch, 
> YARN-6374.003.patch, YARN-6374-branch-2.001.patch
>
>
> Currently, it is tedious to execute Docker-related operations due to the 
> plumbing needed to define the DockerCommand, write the command file, 
> configure the privileged operation, and finally execute the command and 
> validate the result. Obtaining the current status of a Docker container can 
> also be improved. Finally, the test coverage is lacking for Docker Commands. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-05-03 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-6374:
--
Attachment: YARN-6374-branch-2.001.patch

> Improve test coverage and add utility classes for common Docker operations
> --
>
> Key: YARN-6374
> URL: https://issues.apache.org/jira/browse/YARN-6374
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6374.001.patch, YARN-6374.002.patch, 
> YARN-6374.003.patch, YARN-6374-branch-2.001.patch
>
>
> Currently, it is tedious to execute Docker-related operations due to the 
> plumbing needed to define the DockerCommand, write the command file, 
> configure the privileged operation, and finally execute the command and 
> validate the result. Obtaining the current status of a Docker container can 
> also be improved. Finally, the test coverage is lacking for Docker Commands. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6522) Make SLS JSON input file format simple and scalable

2017-05-03 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995917#comment-15995917
 ] 

Yufei Gu commented on YARN-6522:


# Patch v3 fixes the style issue. 
# YARN-6506 covers the bugs found by findbugs.
# The unit test failure is caused by the Rumen input format, which hasn't 
worked for a long time. YARN-6111 is the JIRA tracking that issue. 

> Make SLS JSON input file format simple and scalable
> ---
>
> Key: YARN-6522
> URL: https://issues.apache.org/jira/browse/YARN-6522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6522.001.patch, YARN-6522.002.patch, 
> YARN-6522.003.patch
>
>
> The SLS input format is verbose and doesn't scale out. We can improve it in 
> these ways:
> # Tasks must currently be configured one by one when a job has more than one 
> task, so job configurations usually contain many redundant items. Letting the 
> task configuration specify a task count would solve this.
> # Container host is useful for locality testing, but it is tedious to specify 
> a container host for every task in tests unrelated to locality. We would like 
> to make it optional.
> # For most tests, we don't care about job.id. Make it optional and generate it 
> automatically by default.
> # job.finish.ms doesn't make sense; just remove it.
> # Container type and container priority should be optional as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6111) Rumen input does't work in SLS

2017-05-03 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995916#comment-15995916
 ] 

Yufei Gu commented on YARN-6111:


Confirmed that the Rumen input format doesn't work. 

> Rumen input does't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6111) Rumen input does't work in SLS

2017-05-03 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6111:
---
Affects Version/s: 3.0.0-alpha2

> Rumen input does't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6111) Rumen input doesn't work in SLS

2017-05-03 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6111:
---
Fix Version/s: (was: 2.7.3)

> Rumen input doesn't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6111) Rumen input doesn't work in SLS

2017-05-03 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6111:
---
Summary: Rumen input doesn't work in SLS  (was: [SLS] The realtimetrack.json 
is empty)

> Rumen input doesn't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
> Fix For: 2.7.3
>
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6111) [SLS] The realtimetrack.json is empty

2017-05-03 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6111:
---
Component/s: scheduler-load-simulator

> [SLS] The realtimetrack.json is empty 
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
> Fix For: 2.7.3
>
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6522) Make SLS JSON input file format simple and scalable

2017-05-03 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6522:
---
Attachment: YARN-6522.003.patch

> Make SLS JSON input file format simple and scalable
> ---
>
> Key: YARN-6522
> URL: https://issues.apache.org/jira/browse/YARN-6522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6522.001.patch, YARN-6522.002.patch, 
> YARN-6522.003.patch
>
>
> The SLS input format is verbose, and it doesn't scale well. We can improve it 
> in these ways:
> # Tasks must be configured one by one when a job contains more than one task, 
> so the job configuration usually includes lots of redundant items. Letting the 
> task configuration specify the number of tasks solves this issue.
> # The container host is useful for locality testing, but it is tedious to 
> specify a container host for every task in tests unrelated to locality. We 
> would like to make it optional.
> # For most tests, we don't care about job.id. Make it optional and generate it 
> automatically by default.
> # job.finish.ms doesn't make sense; just remove it.
> # Container type and container priority should be optional as well. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-05-03 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-6374:

Fix Version/s: 3.0.0-alpha3

> Improve test coverage and add utility classes for common Docker operations
> --
>
> Key: YARN-6374
> URL: https://issues.apache.org/jira/browse/YARN-6374
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6374.001.patch, YARN-6374.002.patch, 
> YARN-6374.003.patch
>
>
> Currently, it is tedious to execute Docker-related operations because of the 
> plumbing needed to define the DockerCommand, write the command file, configure 
> the privileged operation, and finally execute the command and validate the 
> result. Obtaining the current status of a Docker container can also be 
> improved. Finally, test coverage is lacking for the Docker commands. 
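
As a rough sketch of the kind of utility this aims at (the class and method names 
below are hypothetical, not the API added by the attached patches), a helper can 
hide the "run the command, validate the exit code, capture the output" plumbing 
behind a single call; the real NodeManager path would route through the 
container-executor and a privileged operation rather than ProcessBuilder:

{noformat}
// Illustrative sketch only; names are hypothetical, not the YARN-6374 API.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public final class DockerCommandHelper {

  /** Run a docker CLI command, validate the exit code, and return its output. */
  public static String execute(List<String> command)
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
    String output;
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      output = r.lines().collect(Collectors.joining("\n"));
    }
    if (p.waitFor() != 0) {
      throw new IOException("Docker command failed: " + output);
    }
    return output;
  }

  /** Query the current status of a container, e.g. "running" or "exited". */
  public static String getContainerStatus(String containerId)
      throws IOException, InterruptedException {
    return execute(Arrays.asList(
        "docker", "inspect", "--format", "{{.State.Status}}", containerId)).trim();
  }
}
{noformat}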



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5597) YARN Federation phase 2

2017-05-03 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reassigned YARN-5597:


Assignee: Subru Krishnan

> YARN Federation phase 2
> ---
>
> Key: YARN-5597
> URL: https://issues.apache.org/jira/browse/YARN-5597
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> This umbrella JIRA tracks a set of improvements over the YARN Federation MVP 
> (YARN-2915).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5411:
---
Attachment: YARN-5411-YARN-2915.v7.patch

Fixed the logger by using SLF4J and added the hadoop-common test dependency for 
the Router tests.
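
For reference, an SLF4J-style logger declaration looks roughly like this (the 
class name here is only an example; the SLF4J API itself is as shown):

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RouterClientRMService {
  // SLF4J logger; message formatting is deferred via {} placeholders.
  private static final Logger LOG =
      LoggerFactory.getLogger(RouterClientRMService.class);

  void submit(String appId) {
    LOG.info("Routing submitApplication for application {}", appId);
  }
}
{noformat}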

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch, 
> YARN-5411-YARN-2915.v6.patch, YARN-5411-YARN-2915.v7.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.
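
A minimal sketch of the interceptor pattern being referred to (generic placeholder 
types, not the actual classes in the attached patches): each interceptor handles or 
decorates a client call and delegates the rest to the next element in the chain.

{noformat}
// Generic sketch of an interceptor chain; not the actual YARN-5411 classes.
class Request {}
class Response {}

interface ClientInterceptor {
  void setNextInterceptor(ClientInterceptor next);
  Response handle(Request request) throws Exception;
}

/** Example link in the chain: throttle, then delegate to the next interceptor. */
class ThrottlingInterceptor implements ClientInterceptor {
  private ClientInterceptor next;

  @Override
  public void setNextInterceptor(ClientInterceptor next) {
    this.next = next;
  }

  @Override
  public Response handle(Request request) throws Exception {
    // Placeholder throttling check (cf. YARN-1546); a real interceptor would
    // consult per-client quotas before delegating.
    return next.handle(request);  // e.g. the interceptor that talks to the RM(s)
  }
}
{noformat}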



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-05-03 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995823#comment-15995823
 ] 

Sidharta Seethana commented on YARN-6374:
-

[~shaneku...@gmail.com], thanks for the updated patch. It looks good to me. I 
have checked this patch into trunk. However, I am running into compilation 
errors if I cherry-pick the patch into branch-2 - could you please take a look? 

> Improve test coverage and add utility classes for common Docker operations
> --
>
> Key: YARN-6374
> URL: https://issues.apache.org/jira/browse/YARN-6374
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6374.001.patch, YARN-6374.002.patch, 
> YARN-6374.003.patch
>
>
> Currently, it is tedious to execute Docker-related operations because of the 
> plumbing needed to define the DockerCommand, write the command file, configure 
> the privileged operation, and finally execute the command and validate the 
> result. Obtaining the current status of a Docker container can also be 
> improved. Finally, test coverage is lacking for the Docker commands. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5411:
---
Attachment: (was: YARN-5411-YARN-2915.v7.patch)

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch, 
> YARN-5411-YARN-2915.v6.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5411:
---
Attachment: YARN-5411-YARN-2915.v7.patch

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch, 
> YARN-5411-YARN-2915.v6.patch, YARN-5411-YARN-2915.v7.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995813#comment-15995813
 ] 

Hudson commented on YARN-6374:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11680 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11680/])
YARN-6374. Improve test coverage and add utility classes for common (sidharta: 
rev fd5cb2c9468070abdea3305974ecfc3aa4b0be12)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerLoadCommand.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerRunCommand.java


> Improve test coverage and add utility classes for common Docker operations
> --
>
> Key: YARN-6374
> URL: https://issues.apache.org/jira/browse/YARN-6374
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6374.001.patch, YARN-6374.002.patch, 
> YARN-6374.003.patch
>
>
> Currently, it is tedious to execute Docker-related operations because of the 
> plumbing needed to define the DockerCommand, write the command file, configure 
> the privileged operation, and finally execute the command and validate the 
> result. Obtaining the current status of a Docker container can also be 
> improved. Finally, test coverage is lacking for the Docker commands. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995752#comment-15995752
 ] 

Hadoop QA commented on YARN-5411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
49s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
55s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 4s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-2915 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-2915 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  2s{color} 
| {color:red} root generated 1 new + 777 unchanged - 1 fixed = 778 total (was 
778) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} root: The patch generated 0 new + 207 unchanged - 4 
fixed = 207 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
23s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| 

[jira] [Commented] (YARN-3053) [Security] Review and implement authentication in ATS v.2

2017-05-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995692#comment-15995692
 ] 

Jason Lowe commented on YARN-3053:
--

Thanks for updating the document, Varun!  I think the approach is reasonable, 
since it piggybacks on the discovery problem which already needed to be solved. 
 Also I think it makes sense that we don't need to persist the tokens in any 
way, since the collector needs to be re-discovered if restarted and new tokens 
can be handed out at that point.

Not really a security concern, but I'm assuming the ATSv2 client is going to 
have to buffer/spool events until the collector has been discovered or there's 
some kind of flow control mitigation there.  By default the AM is being started 
with no way to write events until the collector is discovered (which could take 
some number of heartbeats given the circuitous route the information takes) and 
there's also the case where the collector becomes unavailable temporarily 
(e.g.: collector restarts/crashes/etc.).
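
A minimal sketch of the kind of client-side buffering/spooling described above 
(purely illustrative; the class and method names are hypothetical, not the actual 
ATSv2 client): events are queued until a collector address is known, then drained.

{noformat}
// Illustrative sketch of client-side event spooling; not the actual ATSv2 client.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class SpoolingTimelineClient {
  // Bounded buffer so a missing collector cannot exhaust AM memory.
  private final BlockingQueue<String> pending = new LinkedBlockingQueue<>(10000);
  private volatile String collectorAddress;   // unknown until discovered via heartbeat

  /** Called by the AM; spools if the collector is not (yet) known. */
  void putEntity(String entity) {
    if (collectorAddress == null) {
      if (!pending.offer(entity)) {
        // Flow-control decision point: drop, block, or spill to disk.
      }
      return;
    }
    send(entity);
  }

  /** Called when the collector address arrives (or changes after a restart). */
  void setCollectorAddress(String address) {
    this.collectorAddress = address;
    String buffered;
    while ((buffered = pending.poll()) != null) {
      send(buffered);              // drain everything spooled while undiscovered
    }
  }

  private void send(String entity) {
    // A real client would POST the entity to collectorAddress here.
  }
}
{noformat}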


> [Security] Review and implement authentication in ATS v.2
> -
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: ATSv2Authentication(draft).pdf, 
> ATSv2Authentication.v01.pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6554) ResourceManagerRest.md should document "<CPS>" and "{{", "}}" meanings

2017-05-03 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-6554:


 Summary: ResourceManagerRest.md should document "<CPS>" and "{{", 
"}}" meanings
 Key: YARN-6554
 URL: https://issues.apache.org/jira/browse/YARN-6554
 Project: Hadoop YARN
  Issue Type: Bug
  Components: site
Reporter: Grant Sohn
Priority: Trivial


The docs should mention the meaning of "<CPS>", "{{" and "}}". These are 
explained fully in the code.

{noformat}
  /**
   * This constant is used to construct class path and it will be replaced with
   * real class path separator(':' for Linux and ';' for Windows) by
   * NodeManager on container launch. User has to use this constant to construct
   * class path if user wants cross-platform practice i.e. submit an application
   * from a Windows client to a Linux/Unix server or vice versa.
   */
  @Public
  @Unstable
  public static final String CLASS_PATH_SEPARATOR= "<CPS>";

  /**
   * The following two constants are used to expand parameter and it will be
   * replaced with real parameter expansion marker ('%' for Windows and '$' for
   * Linux) by NodeManager on container launch. For example: {{VAR}} will be
   * replaced as $VAR on Linux, and %VAR% on Windows. User has to use this
   * constant to construct class path if user wants cross-platform practice i.e.
   * submit an application from a Windows client to a Linux/Unix server or vice
   * versa.
   */
  @Public
  @Unstable
  public static final String PARAMETER_EXPANSION_LEFT="{{";

  /**
   * User has to use this constant to construct class path if user wants
   * cross-platform practice i.e. submit an application from a Windows client to
   * a Linux/Unix server or vice versa.
   */
  @Public
  @Unstable
  public static final String PARAMETER_EXPANSION_RIGHT="}}";
{noformat}
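
For context, this is roughly how those constants are used when assembling a 
cross-platform container launch command. ApplicationConstants and its Environment 
enum are the YARN API classes quoted above; the application class name is a 
made-up example:

{noformat}
// Illustrative use of the cross-platform constants when building a launch command.
import org.apache.hadoop.yarn.api.ApplicationConstants;
import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;

public class LaunchCommandExample {
  public static String buildCommand() {
    // "<CPS>" becomes ':' or ';' and "{{...}}" becomes $VAR or %VAR% at launch.
    String classPath = Environment.CLASSPATH.$$()
        + ApplicationConstants.CLASS_PATH_SEPARATOR + "./app.jar";
    return Environment.JAVA_HOME.$$() + "/bin/java -cp " + classPath
        + " com.example.MyAppMaster";
  }
}
{noformat}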



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995629#comment-15995629
 ] 

Hadoop QA commented on YARN-5411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
43s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 6s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-2915 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-2915 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 21s{color} 
| {color:red} root generated 1 new + 777 unchanged - 1 fixed = 778 total (was 
778) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} root: The patch generated 0 new + 207 unchanged - 4 
fixed = 207 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
44s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| 

[jira] [Comment Edited] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995511#comment-15995511
 ] 

Giovanni Matteo Fumarola edited comment on YARN-5411 at 5/3/17 7:44 PM:


Thanks [~botong], I added log4j and removed TestRouter.
The Findbugs and javac errors are not related to the patch.


was (Author: giovanni.fumarola):
Thanks [~botong], I added log4j and remove TestRouter.
The Findbugs and javac errors are not related to the patch.

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch, 
> YARN-5411-YARN-2915.v6.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995511#comment-15995511
 ] 

Giovanni Matteo Fumarola commented on YARN-5411:


Thanks [~botong], I added log4j and remove TestRouter.
The Findbugs and javac errors are not related to the patch.

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch, 
> YARN-5411-YARN-2915.v6.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5411:
---
Attachment: YARN-5411-YARN-2915.v6.patch

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch, 
> YARN-5411-YARN-2915.v6.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995443#comment-15995443
 ] 

Botong Huang commented on YARN-5411:


Please change the Log in {{MockResourceManagerFacade}} to Log4j, thx

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995429#comment-15995429
 ] 

Hadoop QA commented on YARN-5411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
31s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
45s{color} | {color:green} YARN-2915 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-2915 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-2915 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 52s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 40 unchanged - 
1 fixed = 41 total (was 41) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 207 unchanged - 4 fixed = 207 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
16s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | 

[jira] [Commented] (YARN-679) add an entry point that can start any Yarn service

2017-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995412#comment-15995412
 ] 

Hudson commented on YARN-679:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11679 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11679/])
HDFS-11739. Fix regression in tests caused by YARN-679. Contributed by 
(liuml07: rev 83dded556dc179fcff078451fb80533065e116f0)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ExitUtil.java
* (edit) 
hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java


> add an entry point that can start any Yarn service
> --
>
> Key: YARN-679
> URL: https://issues.apache.org/jira/browse/YARN-679
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf, 
> YARN-679-001.patch, YARN-679-002.patch, YARN-679-002.patch, 
> YARN-679-003.patch, YARN-679-004.patch, YARN-679-005.patch, 
> YARN-679-006.patch, YARN-679-007.patch, YARN-679-008.patch, 
> YARN-679-009.patch, YARN-679-010.patch, YARN-679-011.patch, YARN-679-013.patch
>
>  Time Spent: 72h
>  Remaining Estimate: 0h
>
> There's no need to write separate .main classes for every YARN service, given 
> that the startup mechanism should be identical: create, init, start, wait for 
> stopped, with an interrupt handler to trigger a clean shutdown on a Ctrl-C 
> interrupt.
> Provide one that takes any class name and a list of config files/options.
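
A rough sketch of such an entry point, assuming only the standard 
org.apache.hadoop.service.Service lifecycle (this is not the launcher the attached 
patches add; the argument handling and class name are illustrative):

{noformat}
// Minimal sketch of a generic "start any YARN service" entry point.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.service.Service;

public final class GenericServiceMain {
  public static void main(String[] args) throws Exception {
    String className = args[0];                       // service class to launch
    Service service = (Service) Class.forName(className)
        .getDeclaredConstructor().newInstance();

    Configuration conf = new Configuration();
    for (int i = 1; i < args.length; i++) {
      conf.addResource(new Path(args[i]));            // extra config files
    }

    // Clean shutdown on Ctrl-C.
    Runtime.getRuntime().addShutdownHook(new Thread(service::stop));

    service.init(conf);
    service.start();
    service.waitForServiceToStop(Long.MAX_VALUE);     // block until the service stops
  }
}
{noformat}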



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2017-05-03 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6553:
-
Description: Currently the AMRMProxy and Router tests use the 
{{MockResourceManagerFacade}}. This JIRA proposes replacing it with {{MockRM}}, 
as is done in the majority of the tests.
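
For reference, a typical {{MockRM}}-based test follows the shape below (a minimal 
sketch assuming the standard MockRM lifecycle; the interceptor wiring a real 
Router/AMRMProxy test would need is omitted):

{noformat}
// Minimal sketch of the MockRM test style; not an actual Router/AMRMProxy test.
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.junit.Test;

public class TestWithMockRM {
  @Test
  public void testAgainstMockRM() throws Exception {
    MockRM rm = new MockRM(new YarnConfiguration());
    rm.start();
    try {
      // Exercise the component under test against the in-process RM here,
      // e.g. submit an application and assert on its state.
    } finally {
      rm.stop();
    }
  }
}
{noformat}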

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This JIRA proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2017-05-03 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6553:
-
Summary: Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router 
tests  (was: Remove MockResourceManagerFacade and use MockRM for 
AMRMProxy/Router tests)

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995394#comment-15995394
 ] 

Giovanni Matteo Fumarola commented on YARN-5411:


Thanks [~subru]. I fixed the version in pom.xml and addressed the suggestion I had missed.

About this:
bq. A clarification regarding MockResourceManagerFacade - why do we need it as 
there's MockRM already which is widely used in tests. IF we do need it, call 
out in the class Javadocs that it's used by AMRMProxy/Router tests.
It is a good idea, but it requires additional work. I am opening a JIRA under 
FederationV2 to track this: YARN-6553

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6553) Remove MockResourceManagerFacade and use MockRM for AMRMProxy/Router tests

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-6553:
--

 Summary: Remove MockResourceManagerFacade and use MockRM for 
AMRMProxy/Router tests
 Key: YARN-6553
 URL: https://issues.apache.org/jira/browse/YARN-6553
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5411:
---
Attachment: YARN-5411-YARN-2915.v5.patch

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch, YARN-5411-YARN-2915.v5.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995281#comment-15995281
 ] 

Subru Krishnan edited comment on YARN-5411 at 5/3/17 6:13 PM:
--

Thanks [~giovanni.fumarola] for addressing my comments. I have manually kicked 
Yetus as it didn't pick up the latest patch. 

Regarding the version in pom.xml, you should add the test-jar in the hadoop 
main pom 
[here|https://github.com/apache/hadoop/blob/trunk/hadoop-project/pom.xml] and 
then the version gets automatically inherited.

You seemed to have missed this suggestion:
bq. MockClientRequestInterceptor should extend DefaultClientRequestInterceptor 
and only override init.

A clarification regarding {{MockResourceManagerFacade}} - why do we need it as 
there's {{MockRM}} already which is widely used in tests. IF we do need it, 
call out in the class Javadocs that it's used by {{AMRMProxy/Router}} tests.



was (Author: subru):
Thanks [~giovanni.fumarola] for addressing my comments. I have manually kicked 
Yetus as it didn't pick up the latest patch. 

Regarding the version in pom.xml, you should add the test-jar in the hadoop 
main pom 
[here|https://github.com/apache/hadoop/blob/trunk/hadoop-project/pom.xml] and 
then the version gets automatically inherited.

You seemed to have missed this suggestion:
bq. MockClientRequestInterceptor should extend DefaultClientRequestInterceptor 
and only override init.
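
A minimal sketch of what that suggestion amounts to, as a fragment (the class names 
come from the comment above; the exact init signature is assumed):

{noformat}
// Sketch only: class names from the review comment, init signature assumed.
public class MockClientRequestInterceptor extends DefaultClientRequestInterceptor {
  @Override
  public void init(String userName) {
    // Point the default interceptor at a mock/in-process ResourceManager
    // instead of a real one; everything else is inherited unchanged.
    super.init(userName);
  }
}
{noformat}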


> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling misbehaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6545) Followup fix for YARN-6405

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995332#comment-15995332
 ] 

Hadoop QA commented on YARN-6545:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
37s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core
 generated 5 new + 25 unchanged - 5 fixed = 30 total (was 30) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core:
 The patch generated 2 new + 175 unchanged - 5 fixed = 177 total (was 180) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.server.appmaster.timelineservice.TestServiceTimelinePublisher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6545 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866222/YARN-6545.yarn-native-services.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 62f783edea9b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / e238402 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-03 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2113:
--
Attachment: YARN-2113 Intra-QueuePreemption Behavior.pdf

Attaching a doc which captures the various preemption order policies and their 
behavior.

[~eepayne]

As per this 
[comment|https://issues.apache.org/jira/browse/YARN-2113?focusedCommentId=15985032=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15985032],
 I assumed that preemption has to occur based on priority even when 
USERLIMIT_FIRST is configured. Yes, preemption will happen for users who are 
above their UL. But the example given in that comment was pointing to a case 
where the user's usage was under the UL, and the question was whether 
preemption was needed there at all.

Hence I added the check, which you mentioned in the above comment, to allow 
priority-based preemption even if user.used is less than the UL while 
USERLIMIT_FIRST is configured. Now, as per the attached doc, we are clear about 
what the behavior should be for each mode, so I will remove that check (which 
is no longer needed) in the next patch.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6552) Increase YARN test timeouts from 1 second to 10 seconds

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995294#comment-15995294
 ] 

Hadoop QA commented on YARN-6552:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 101 unchanged - 1 fixed = 102 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 39s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.client.api.impl.TestAMRMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6552 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866204/YARN-6552.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8ae76bdead5f 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d4631e4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15810/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 

[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-05-03 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995281#comment-15995281
 ] 

Subru Krishnan commented on YARN-5411:
--

Thanks [~giovanni.fumarola] for addressing my comments. I have manually kicked 
Yetus as it didn't pick up the latest patch. 

Regarding the version in pom.xml, you should add the test-jar in the hadoop 
main pom 
[here|https://github.com/apache/hadoop/blob/trunk/hadoop-project/pom.xml] and 
then the version gets automatically inherited.

You seem to have missed this suggestion:
bq. MockClientRequestInterceptor should extend DefaultClientRequestInterceptor 
and only override init.


> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch, 
> YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, 
> YARN-5411-YARN-2915.v4.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling mis-behaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6545) Followup fix for YARN-6405

2017-05-03 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6545:
-
Attachment: YARN-6545.yarn-native-services.02.patch

[~jianhe], here is a new patch based on yours where I've fixed a couple of 
issues and added some tokens. I also fixed the problem of having the AppState 
in ProviderRole.

I ran a test app where I created a config file containing all the substitution 
tokens, so I think this is working.

> Followup fix for YARN-6405
> --
>
> Key: YARN-6545
> URL: https://issues.apache.org/jira/browse/YARN-6545
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6545.yarn-native-services.01.patch, 
> YARN-6545.yarn-native-services.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-05-03 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995184#comment-15995184
 ] 

Eric Payne commented on YARN-2113:
--

[~sunilg], would you mind, please, explaining why the check for 
{{usedMinusOneContainer > userLimit}} is necessary? If this code will skip if 
{{(used - currentContainer) > userLimit}}, why is it also necessary to check 
{{usedMinusOneContainer > userLimit}}? Won't that check happen when the 
youngest container _is_ the current container?
{code:title=FifoIntraQueuePreemptionPlugin#skipContainerBasedOnIntraQueuePolicy}
    return Resources.lessThanOrEqual(rc, clusterResource,
        Resources.subtract(usedResource, c.getAllocatedResource()), userLimit)
        && Resources.greaterThan(rc, clusterResource, usedMinusOneContainer,
            userLimit)
        && context.getIntraQueuePreemptionOrder()
            .equals(IntraQueuePreemptionOrder.USERLIMIT_FIRST);
  }
{code}

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6552) Increase YARN test timeouts from 1 second to 10 seconds

2017-05-03 Thread Eric Badger (JIRA)
Eric Badger created YARN-6552:
-

 Summary: Increase YARN test timeouts from 1 second to 10 seconds
 Key: YARN-6552
 URL: https://issues.apache.org/jira/browse/YARN-6552
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


1 second test timeouts are susceptible to failure on overloaded or otherwise 
slow machines
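
For illustration, the kind of change involved is simply bumping the JUnit 
timeout annotation, along these lines (hypothetical test, not from the patch):

{code:title=Illustrative only (hypothetical test)}
import org.junit.Test;

public class TestTimeoutExample {
  // Before: @Test(timeout = 1000) -- one second is easily exceeded on an
  // overloaded build machine even when the code under test is fine.
  // After: the same test with a ten second budget.
  @Test(timeout = 10000)
  public void testCompletesWithinBudget() throws Exception {
    Thread.sleep(500); // stands in for work that normally finishes well under 1s
  }
}
{code}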



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6375) App level aggregation should not consider metric values reported in the previous aggregation cycle

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995095#comment-15995095
 ] 

Hadoop QA commented on YARN-6375:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6375 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866194/YARN-6375-YARN-5355.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 15d58c1d884c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 1f98134 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15809/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15809/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15809/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> App level aggregation should not consider metric values reported in the 
> previous aggregation cycle
> 

[jira] [Updated] (YARN-6552) Increase YARN test timeouts from 1 second to 10 seconds

2017-05-03 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-6552:
--
Attachment: YARN-6552.001.patch

Uploading patch

> Increase YARN test timeouts from 1 second to 10 seconds
> ---
>
> Key: YARN-6552
> URL: https://issues.apache.org/jira/browse/YARN-6552
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-6552.001.patch
>
>
> 1 second test timeouts are susceptible to failure on overloaded or otherwise 
> slow machines



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6375) App level aggregation should not consider metric values reported in the previous aggregation cycle

2017-05-03 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6375:
---
Attachment: YARN-6375-YARN-5355.02.patch

> App level aggregation should not consider metric values reported in the 
> previous aggregation cycle
> --
>
> Key: YARN-6375
> URL: https://issues.apache.org/jira/browse/YARN-6375
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-6375-YARN-5355.01.patch, 
> YARN-6375-YARN-5355.02.patch
>
>
> Currently, app-level aggregation is done every 15 seconds, and we consider the 
> last reported metric value of each entity belonging to an app for aggregation.
> However, we merely update the corresponding metric values for an entity on 
> put; we never remove the entries.
> It is possible that multiple entities finish during the lifetime of an 
> application, yet we continue to consider them until the end.
> We should not consider the metric values of an entity unless they were 
> reported within the 15-second period.
> Consider containers: for a long-running app, several containers would start 
> and end at various times during the lifetime of the app. Considering the 
> metrics of all those containers throughout the lifetime of the app would 
> therefore not be correct.
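
A rough sketch of the intended behavior (simplified stand-in types, not the 
actual timeline service classes):

{code:title=Sketch: only aggregate values reported in the current cycle (stand-in types)}
import java.util.Map;

final class WindowedAggregator {
  /**
   * Sums only the values whose last report falls inside the current
   * aggregation cycle; entities that stopped reporting earlier are ignored.
   * Map value = {metricValue, lastReportedTimeMillis}.
   */
  static long aggregate(Map<String, long[]> lastReportedPerEntity,
      long nowMillis, long cycleMillis) {
    long windowStart = nowMillis - cycleMillis; // e.g. now - 15000
    long sum = 0;
    for (long[] v : lastReportedPerEntity.values()) {
      if (v[1] >= windowStart) {
        sum += v[0];
      }
    }
    return sum;
  }
}
{code}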



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-05-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15995030#comment-15995030
 ] 

Jason Lowe commented on YARN-6523:
--

Sending the full list at registration time makes a lot of sense to me, and I 
also think we can get the delta to work with some effort.  Note however that 
the delta is _per node_ not some global delta, because nodes may be 
heartbeating at drastically different times.  Therefore there isn't going to be 
a good way to build a single, pre-computed SystemCredentialsForAppsProto for 
deltas.  Each node will have to receive the app tokens that have been renewed 
since their last heartbeat, and that will be a different list than for other 
nodes in the cluster.  There will be many that will share the same delta, but 
it won't be the same for all of them.

Also note that there is going to be an interface change even with your 
proposal.  The current code assumes that the system credentials received in a 
heartbeat _replace_ the previous set of credentials.  If we suddenly start 
sending a delta in heartbeats instead of the full set then that's an 
incompatible semantic change even though the technical signature of the 
interface did not change.  Old nodemanagers during a rolling upgrade will not 
do the correct thing and apps could fail.  So minimally the RM would need to 
check the NM version and always send the full system credentials in each 
heartbeat if the NM version is "old" and only use the delta when the NM is 
beyond a certain version.
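
As a rough illustration of why a single pre-computed message does not work for 
deltas, the per-node bookkeeping would look something like this (names are made 
up for the sketch, not the actual RM classes):

{code:title=Sketch: per-node credential delta (assumed names)}
import java.util.HashMap;
import java.util.Map;

final class PerNodeCredentialDelta {
  /** Monotonically increasing id, bumped every time any app token is renewed. */
  private long latestTokenSequence = 0;
  /** Sequence number at which each app's tokens were last renewed. */
  private final Map<String, Long> appTokenSequence = new HashMap<>();
  /** Sequence each node had seen as of its previous heartbeat. */
  private final Map<String, Long> nodeLastSeenSequence = new HashMap<>();

  void onTokenRenewed(String appId) {
    appTokenSequence.put(appId, ++latestTokenSequence);
  }

  /** Apps whose tokens changed since this node's last heartbeat. */
  Map<String, Long> deltaFor(String nodeId) {
    long lastSeen = nodeLastSeenSequence.getOrDefault(nodeId, 0L);
    Map<String, Long> delta = new HashMap<>();
    for (Map.Entry<String, Long> e : appTokenSequence.entrySet()) {
      if (e.getValue() > lastSeen) {
        delta.put(e.getKey(), e.getValue());
      }
    }
    nodeLastSeenSequence.put(nodeId, latestTokenSequence);
    return delta; // different nodes get different deltas
  }
}
{code}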

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though all applications might not be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8 GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-05-03 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994995#comment-15994995
 ] 

Naganarasimha G R edited comment on YARN-6523 at 5/3/17 2:45 PM:
-

Sorry for the delay in response, [~jlowe].
Thanks for the very detailed response. I agree that the delta approaches 
initially mentioned can introduce a certain amount of complexity in the cases 
you described.
Though the approach you mentioned initially looked appealing and less 
complicated, I was thinking of the following scenarios:
# When there are a large number of small jobs in a large cluster, we end up 
sending almost all the tokens all the time, as the sequence keeps increasing 
when more and more jobs get submitted.
# Since we are modifying the interface anyway, it would be better to go for a 
complete solution so that it is not revisited again for deprecation.

One other approach I can think of is: send all the tokens during node 
registration (this avoids most of the corner cases), and as part of the 
heartbeat send all the app tokens which have been renewed (which can be done in 
an event-based model). Further, we can keep a pre-computed cache of the 
SystemCredentialsForAppsProto objects that are sent as part of the heartbeat, 
so that we reduce the memory footprint. This approach would thus handle a large 
number of small jobs as well, without an interface change. Thoughts?
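
A crude sketch of the caching part of that idea (names are assumed; the real 
message type is SystemCredentialsForAppsProto, but the cache here just stores 
opaque serialized blobs):

{code:title=Sketch: reuse one pre-computed credentials message per app (assumed names)}
import java.nio.ByteBuffer;
import java.util.Collection;
import java.util.concurrent.ConcurrentHashMap;

final class SystemCredentialsCache {
  // One serialized credentials message per app, built once when the token is
  // renewed (event driven), instead of once per node per heartbeat.
  private final ConcurrentHashMap<String, ByteBuffer> byApp =
      new ConcurrentHashMap<>();

  void onTokenRenewed(String appId, ByteBuffer serializedCredentials) {
    byApp.put(appId, serializedCredentials);
  }

  void onAppFinished(String appId) {
    byApp.remove(appId);
  }

  /** The heartbeat path just hands out the shared, already-built messages. */
  Collection<ByteBuffer> forHeartbeat() {
    return byApp.values();
  }
}
{code}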


was (Author: naganarasimha):
Sorry for the delay in response [~jlowe],
Thanks for the very detailed response. Agree that the delta approaches 
initially mentioned can introduce certain amount of complexity in the cases 
mentioned by you.
Though initially the approach mentioned by you was appealing and less 
complicated, i was thinking of following scenarios :
# When there are large number of small jobs in a large clsuter we almost send 
the tokens as the sequence keeps increasing when more and more jobs get 
submitted.
# Well we are doing interface modification, so it would be better to go for 
complete solution so that its not revisited again for deprecation.

One other approach which i can think of is : Send all the tokens during node 
registration ( This will avoid most of the corner cases) and as part of 
heartbeat send the app tokens(all) which have been renewed (which can be done 
in event based model). Further we can have the cache(pre-computed) of 
SystemCredentialsForAppsProto which are sent as part of Heart Beat so that we 
reduce memory foot print. thus this approach would solve large number of small 
jobs too without interface change. thoughts ?

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though all applications might not be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8 GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-05-03 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994995#comment-15994995
 ] 

Naganarasimha G R commented on YARN-6523:
-

Sorry for the delay in response, [~jlowe].
Thanks for the very detailed response. I agree that the delta approaches 
initially mentioned can introduce a certain amount of complexity in the cases 
you described.
Though the approach you mentioned initially looked appealing and less 
complicated, I was thinking of the following scenarios:
# When there are a large number of small jobs in a large cluster, we end up 
sending the tokens almost all the time, as the sequence keeps increasing when 
more and more jobs get submitted.
# Since we are modifying the interface anyway, it would be better to go for a 
complete solution so that it is not revisited again for deprecation.

One other approach I can think of is: send all the tokens during node 
registration (this avoids most of the corner cases), and as part of the 
heartbeat send all the app tokens which have been renewed (which can be done in 
an event-based model). Further, we can keep a pre-computed cache of the 
SystemCredentialsForAppsProto objects that are sent as part of the heartbeat, 
so that we reduce the memory footprint. This approach would thus handle a large 
number of small jobs as well, without an interface change. Thoughts?

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though all applications might not be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8 GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-05-03 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994963#comment-15994963
 ] 

Varun Saxena edited comment on YARN-2962 at 5/3/17 2:25 PM:


Thanks a lot [~templedf] for the review and commit. This JIRA had been pending 
for a long time.

The attempt in the patch was to have a seamless transition from one split index 
to another and/or from the previous version to this one.
We look through the alternate paths to decide where an application is stored.

Thanks [~kasha] and [~asuresh] for the reviews as well.
Thanks [~rakeshr] for your suggestion and insights into the behavior of 
ZooKeeper.
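
For readers not familiar with the layout, the "look through alternate paths" 
part is roughly the following (the path scheme and the lookup below are only 
indicative; they are not the actual ZKRMStateStore code):

{code:title=Sketch: locating an app znode across split indices (illustrative only)}
import java.util.Set;

final class AppPathLookup {
  /**
   * The app znode may live under a different parent depending on which split
   * index was in effect when it was written, so the lookup simply tries each
   * candidate layout in turn. "existingNodes" stands in for a ZooKeeper
   * exists() check; the path scheme below is only indicative.
   */
  static String findAppPath(String appId, Set<String> existingNodes) {
    for (int splitIndex = 0; splitIndex <= 4; splitIndex++) {
      int cut = appId.length() - splitIndex;
      String candidate = (splitIndex == 0)
          ? "/rmstore/RMAppRoot/" + appId
          : "/rmstore/RMAppRoot/HIERARCHIES/" + splitIndex + "/"
              + appId.substring(0, cut) + "/" + appId.substring(cut);
      if (existingNodes.contains(candidate)) {
        return candidate;
      }
    }
    return null; // not stored under any known layout
  }
}
{code}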


was (Author: varun_saxena):
Thanks a lot [~templedf] for the review and commit. This JIRA was pending since 
a long time.

The attempt in the patch was to have seamless transition from one split index 
to another and/or from previous version to this one.
We look through alternate paths to decide where application is stored.

Thanks [~kasha] and [~asuresh] for the reviews as well.

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-2962.006.patch, YARN-2962.007.patch, 
> YARN-2962.008.patch, YARN-2962.008.patch, YARN-2962.009.patch, 
> YARN-2962.010.patch, YARN-2962.011.patch, YARN-2962.01.patch, 
> YARN-2962.04.patch, YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-05-03 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994963#comment-15994963
 ] 

Varun Saxena commented on YARN-2962:


Thanks a lot [~templedf] for the review and commit. This JIRA had been pending 
for a long time.

The attempt in the patch was to have a seamless transition from one split index 
to another and/or from the previous version to this one.
We look through the alternate paths to decide where an application is stored.

Thanks [~kasha] and [~asuresh] for the reviews as well.

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-2962.006.patch, YARN-2962.007.patch, 
> YARN-2962.008.patch, YARN-2962.008.patch, YARN-2962.009.patch, 
> YARN-2962.010.patch, YARN-2962.011.patch, YARN-2962.01.patch, 
> YARN-2962.04.patch, YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6544) Add Null check RegistryDNS service while parsing registry records

2017-05-03 Thread Karam Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karam Singh updated YARN-6544:
--
Attachment: YARN-6544-yarn-native-services.002.patch

Second patch: added an else part for the null-check if statement, and also 
tried to address the checkstyle warnings.
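
The shape of the guard being discussed is roughly this (placeholder types, 
attribute key and fallback value; not the actual RegistryDNS code):

{code:title=Illustrative only (placeholder types)}
import java.util.Map;

final class PersistenceGuard {
  /** Returns the record's persistence value, or a fallback when it is absent. */
  static String persistenceOf(Map<String, String> registryRecord) {
    String persistence = registryRecord.get("yarn:persistence");
    if (persistence != null) {
      return persistence;
    }
    // The added else branch: the attribute is optional, so fall back instead
    // of letting a later dereference blow up with a NullPointerException.
    return ""; // assumed fallback, for illustration only
  }
}
{code}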

> Add Null check RegistryDNS service while parsing registry records
> -
>
> Key: YARN-6544
> URL: https://issues.apache.org/jira/browse/YARN-6544
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: yarn-native-services
>Reporter: Karam Singh
>Assignee: Karam Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-6544-yarn-native-services.001.patch, 
> YARN-6544-yarn-native-services.002.patch
>
>
> Add a null check in the RegistryDNS service while parsing registry records 
> for the YARN persistence attribute.
> As of now it assumes that a YARN registry record always contains the YARN 
> persistence attribute, which is not the case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3269) Yarn.nodemanager.remote-app-log-dir could not be configured to fully qualified path

2017-05-03 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994501#comment-15994501
 ] 

stefanlee edited comment on YARN-3269 at 5/3/17 8:59 AM:
-

Thanks for this JIRA. But when we do not configure 
*yarn.nodemanager.remote-app-log-dir*, it defaults to */tmp/logs*, and the NM 
or the MR history server will check the scheme of */tmp/logs* when writing logs 
to or reading logs from HDFS; they will then throw a {{No AbstractFileSystem 
for scheme: null}} exception from *AbstractFileSystem.java*. So I suggest that 
when *yarn.nodemanager.remote-app-log-dir* is left at its default value (the 
scheme is null), we still use *FileContext.getFileContext(conf)* in 
*AggregatedLogFormat.java*, and otherwise we use 
*FileContext.getFileContext(remoteAppLogFile.toUri(), conf)*. Am I wrong about 
this? My Hadoop version is 2.4.0. [~xgong] [~zhz]
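
In code, the suggestion amounts to something like the following (a sketch only; 
whether *AggregatedLogFormat* is the right place for it is exactly my 
question):

{code:title=Sketch of the suggested fallback (illustrative only)}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.UnsupportedFileSystemException;

final class LogDirFileContext {
  static FileContext forLogFile(Path remoteAppLogFile, Configuration conf)
      throws UnsupportedFileSystemException {
    URI uri = remoteAppLogFile.toUri();
    if (uri.getScheme() == null) {
      // default /tmp/logs case: no scheme in the path, use the default FS
      return FileContext.getFileContext(conf);
    }
    // fully qualified remote-app-log-dir: honour the scheme from the URI
    return FileContext.getFileContext(uri, conf);
  }
}
{code}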


was (Author: imstefanlee):
thanks for this jira, but  when we do not config 
*yarn.nodemanager.remote-app-log-dir*, it will be */tmp/logs* and NM   or  mr 
historyserver  will check */tmp/logs* 's  scheme  when write log to HDFS or 
read log from HDFS,then they will throw exception of {{No AbstractFileSystem 
for scheme: null}}  in class *AbstractFileSystem.java* ,so i suggest that when  
*yarn.nodemanager.remote-app-log-dir*  is default value(scheme is null ), we 
still use *FileContext.getFileContext(conf)*  in *AggregatedLogFormat.java*, 
else we can use *FileContext.getFileContext(remoteAppLogFile.toUri(), conf)* , 
am i wrong with this question?  [~xgong]  [~zhz]

> Yarn.nodemanager.remote-app-log-dir could not be configured to fully 
> qualified path
> ---
>
> Key: YARN-3269
> URL: https://issues.apache.org/jira/browse/YARN-3269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: YARN-3269.1.patch, YARN-3269.2.patch
>
>
> Log aggregation currently is always relative to the default file system, not 
> an arbitrary file system identified by URI. So we can't put an arbitrary 
> fully-qualified URI into yarn.nodemanager.remote-app-log-dir.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3269) Yarn.nodemanager.remote-app-log-dir could not be configured to fully qualified path

2017-05-03 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994501#comment-15994501
 ] 

stefanlee commented on YARN-3269:
-

Thanks for this JIRA. But when we do not configure 
*yarn.nodemanager.remote-app-log-dir*, it defaults to */tmp/logs*, and the NM 
or the MR history server will check the scheme of */tmp/logs* when writing logs 
to or reading logs from HDFS; they will then throw a {{No AbstractFileSystem 
for scheme: null}} exception from *AbstractFileSystem.java*. So I suggest that 
when *yarn.nodemanager.remote-app-log-dir* is left at its default value (the 
scheme is null), we still use *FileContext.getFileContext(conf)* in 
*AggregatedLogFormat.java*, and otherwise we use 
*FileContext.getFileContext(remoteAppLogFile.toUri(), conf)*. Am I wrong about 
this? [~xgong] [~zhz]

> Yarn.nodemanager.remote-app-log-dir could not be configured to fully 
> qualified path
> ---
>
> Key: YARN-3269
> URL: https://issues.apache.org/jira/browse/YARN-3269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: YARN-3269.1.patch, YARN-3269.2.patch
>
>
> Log aggregation currently is always relative to the default file system, not 
> an arbitrary file system identified by URI. So we can't put an arbitrary 
> fully-qualified URI into yarn.nodemanager.remote-app-log-dir.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6398) Implement a new native-service UI

2017-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994381#comment-15994381
 ] 

Hadoop QA commented on YARN-6398:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6398 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865937/YARN-6398-yarn-native-services.001.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux 0d6ce067915e 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / e238402 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15808/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement a new native-service UI
> -
>
> Key: YARN-6398
> URL: https://issues.apache.org/jira/browse/YARN-6398
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil G
>Assignee: Akhil PB
> Attachments: YARN-6398.001.patch, YARN-6398.002.patch, 
> YARN-6398.003.patch, YARN-6398-yarn-native-services.001.patch
>
>
> Create a new and advanced native service UI which can co-exist with the new 
> Yarn UI.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6419) Support to launch new native-service from new YARN UI

2017-05-03 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6419:
--
Summary: Support to launch new native-service from new YARN UI  (was: 
Support to launch native-service deployment from new YARN UI)

> Support to launch new native-service from new YARN UI
> -
>
> Key: YARN-6419
> URL: https://issues.apache.org/jira/browse/YARN-6419
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: Screenshot-deploy-new-service-form-input.png, 
> Screenshot-deploy-new-service-json-input.png, 
> Screenshot-deploy-service-add-component-form-input.png, YARN-6419.001.patch, 
> YARN-6419.002.patch, YARN-6419.003.patch, YARN-6419.004.patch, 
> YARN-6419-yarn-native-services.001.patch, 
> YARN-6419-yarn-native-services.002.patch, 
> YARN-6419-yarn-native-services.003.patch, 
> YARN-6419-yarn-native-services.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org