[jira] [Commented] (YARN-3884) RMContainerImpl transition from RESERVED to KILL apphistory status not updated

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832825#comment-15832825
 ] 

Hadoop QA commented on YARN-3884:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 270 unchanged - 1 fixed = 271 total (was 271) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-3884 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848688/YARN-3884.0007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9b94dcdd1239 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ccf2d66 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14727/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14727/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14727/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-ser

[jira] [Updated] (YARN-3884) RMContainerImpl transition from RESERVED to KILL apphistory status not updated

2017-01-20 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-3884:
---
Attachment: (was: YARN-3384.0007.patch)

> RMContainerImpl transition from RESERVED to KILL apphistory status not updated
> --
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch
>
>
> Setup
> ===
> 1 NM with 3072 MB and 16 cores
> Steps to reproduce
> ===
> 1. Submit apps to Queue 1 with 512 MB and 1 core
> 2. Submit apps to Queue 2 with 512 MB and 5 cores
> Lots of containers get reserved and unreserved in this case.
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=1.0 absoluteUsedCapacity=1.0 
> used= cluster=
> 2015-07-02 20:45:32,143 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from 
> ALLOCATED to ACQUIRED
> 2015-07-02 20:45:32,174 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Trying to fulfill reservation for application application_1435849994778_0002 
> on node: host-10-19-92-143:64318
> 2015-07-02 20:45:32,174 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedReso

[jira] [Updated] (YARN-3884) RMContainerImpl transition from RESERVED to KILL apphistory status not updated

2017-01-20 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-3884:
---
Attachment: YARN-3884.0007.patch

> RMContainerImpl transition from RESERVED to KILL apphistory status not updated
> --
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch
>
>
> Setup
> ===
> 1 NM with 3072 MB and 16 cores
> Steps to reproduce
> ===
> 1. Submit apps to Queue 1 with 512 MB and 1 core
> 2. Submit apps to Queue 2 with 512 MB and 5 cores
> Lots of containers get reserved and unreserved in this case.
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=1.0 absoluteUsedCapacity=1.0 
> used= cluster=
> 2015-07-02 20:45:32,143 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from 
> ALLOCATED to ACQUIRED
> 2015-07-02 20:45:32,174 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Trying to fulfill reservation for application application_1435849994778_0002 
> on node: host-10-19-92-143:64318
> 2015-07-02 20:45:32,174 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> 

[jira] [Updated] (YARN-3884) RMContainerImpl transition from RESERVED to KILL apphistory status not updated

2017-01-20 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-3884:
---
Attachment: YARN-3384.0007.patch

[~varun_saxena]
Thank you for the review. Attaching the patch after the checkstyle fix.

> RMContainerImpl transition from RESERVED to KILL apphistory status not updated
> --
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3384.0007.patch, 
> YARN-3884.0002.patch, YARN-3884.0003.patch, YARN-3884.0004.patch, 
> YARN-3884.0005.patch, YARN-3884.0006.patch
>
>
> Setup
> ===
> 1 NM with 3072 MB and 16 cores
> Steps to reproduce
> ===
> 1. Submit apps to Queue 1 with 512 MB and 1 core
> 2. Submit apps to Queue 2 with 512 MB and 5 cores
> Lots of containers get reserved and unreserved in this case.
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=1.0 absoluteUsedCapacity=1.0 
> used= cluster=
> 2015-07-02 20:45:32,143 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from 
> ALLOCATED to ACQUIRED
> 2015-07-02 20:45:32,174 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Trying to fulfill reservation for application application_1435849994778_0002 
> on node: host-10-19-92-143:64318
> 2015-07-02 20:45:32,174 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resour

[jira] [Commented] (YARN-6099) Improve webservice to list aggregated log files

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832787#comment-15832787
 ] 

Hadoop QA commented on YARN-6099:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 54 unchanged - 6 fixed = 60 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
47s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 12s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m  
3s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6099 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848677/YARN-6099.trunk.3.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 403ffc26f94c 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop

[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2017-01-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832738#comment-15832738
 ] 

Jian He commented on YARN-5910:
---

testFinishedAppRemovalAfterRMRestart passed locally for me..

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Assignee: Jian He
>Priority: Minor
> Attachments: YARN-5910.01.patch, YARN-5910.2.patch, 
> YARN-5910.3.patch, YARN-5910.4.patch, YARN-5910.5.patch, YARN-5910.6.patch, 
> YARN-5910.7.patch
>
>
> As an administrator running many secure (kerberized) clusters, some of which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> that may require services running on other clusters. A particular case where 
> this rears its head is running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal, the job will fail [1]. One can pre-configure 
> the {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFS 
> clusters, but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or span 
> management teams. Ideally, core systems could be configured this way, but 
> jobs could also specify their own token handling and management when needed.
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
> Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
> user292)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Unable to map logical nameservice URI 
> 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
> failover proxy provider configured.
> at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
> at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
> at org.apache.hadoop.security.token.Token.renew(Token.java:377)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:516)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:511)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:425)
> ... 6 more
> {code}
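For reference, a minimal illustrative sketch (not from this issue) of the kind of hdfs-site.xml settings the RM-side configuration needs before the token renewer can resolve a remote HA nameservice such as REMOTECLUSTER; the NameNode hostnames below are made-up placeholders:
{code}
import org.apache.hadoop.conf.Configuration;

public class RemoteClusterConf {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Declare the remote logical nameservice alongside the local one.
    conf.set("dfs.nameservices", "LOCALCLUSTER,REMOTECLUSTER");
    conf.set("dfs.ha.namenodes.REMOTECLUSTER", "nn1,nn2");
    // Placeholder NameNode RPC addresses for the remote cluster.
    conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn1", "remote-nn1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.REMOTECLUSTER.nn2", "remote-nn2.example.com:8020");
    // Without a failover proxy provider for the nameservice, renewal fails as in the trace above.
    conf.set("dfs.client.failover.proxy.provider.REMOTECLUSTER",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    System.out.println(conf.get("dfs.client.failover.proxy.provider.REMOTECLUSTER"));
  }
}
{code}
In practice these entries would live in the hdfs-site.xml loaded by the RM rather than be set programmatically, which is exactly the coordination burden the issue describes.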






[jira] [Created] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-01-20 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-6113:
---

 Summary: re-direct NM Web Service to get container logs for 
finished applications
 Key: YARN-6113
 URL: https://issues.apache.org/jira/browse/YARN-6113
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong


In the NM web UI, when we try to get container logs for a finished application, it 
redirects to the log server based on the configuration yarn.log.server.url. 
We should do the same thing for the NM web service.
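For context, a rough sketch of how a redirect target could be derived from {{yarn.log.server.url}}; the node ID, container ID, user, and the exact path layout below are illustrative assumptions, not the actual NM web service code:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LogServerRedirect {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // yarn.log.server.url points at the aggregated-log server (e.g. the JHS logs page).
    String logServer = conf.get(YarnConfiguration.YARN_LOG_SERVER_URL);
    // Made-up identifiers; the real values would come from the incoming web service request.
    String nodeId = "host-10-19-92-117:45454";
    String containerId = "container_1484000000000_0001_01_000001";
    String user = "hadoop";
    // Hypothetical redirect layout mirroring what the web UI does for finished apps.
    System.out.println(logServer + "/" + nodeId + "/" + containerId + "/" + containerId + "/" + user);
  }
}
{code}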






[jira] [Commented] (YARN-6099) Improve webservice to list aggregated log files

2017-01-20 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832716#comment-15832716
 ] 

Xuan Gong commented on YARN-6099:
-

Thanks for the review, [~djp].

Uploaded new patches to address your comments.

> Improve webservice to list aggregated log files
> ---
>
> Key: YARN-6099
> URL: https://issues.apache.org/jira/browse/YARN-6099
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6099.1.patch, YARN-6099.branch-2.v2.patch, 
> YARN-6099.branch-2.v3.patch, YARN-6099.trunk.1.patch, 
> YARN-6099.trunk.2.patch, YARN-6099.trunk.3.patch
>
>







[jira] [Updated] (YARN-6099) Improve webservice to list aggregated log files

2017-01-20 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6099:

Attachment: YARN-6099.branch-2.v3.patch

> Improve webservice to list aggregated log files
> ---
>
> Key: YARN-6099
> URL: https://issues.apache.org/jira/browse/YARN-6099
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6099.1.patch, YARN-6099.branch-2.v2.patch, 
> YARN-6099.branch-2.v3.patch, YARN-6099.trunk.1.patch, 
> YARN-6099.trunk.2.patch, YARN-6099.trunk.3.patch
>
>







[jira] [Updated] (YARN-6099) Improve webservice to list aggregated log files

2017-01-20 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6099:

Attachment: YARN-6099.trunk.3.patch

> Improve webservice to list aggregated log files
> ---
>
> Key: YARN-6099
> URL: https://issues.apache.org/jira/browse/YARN-6099
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6099.1.patch, YARN-6099.branch-2.v2.patch, 
> YARN-6099.branch-2.v3.patch, YARN-6099.trunk.1.patch, 
> YARN-6099.trunk.2.patch, YARN-6099.trunk.3.patch
>
>







[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832713#comment-15832713
 ] 

Hadoop QA commented on YARN-5910:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 29 new + 1445 
unchanged - 10 fixed = 1474 total (was 1455) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
41s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {co

[jira] [Commented] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832689#comment-15832689
 ] 

Sangjin Lee commented on YARN-3637:
---

Thanks [~ctrezzo] for your patch!

It looks good for the most part. One small suggestion would be to handle the 
case where the resource name is the same as the file name returned by the 
shared cache manager; I suspect that would be > 90% of the cases. In that case, 
we may not want to add the redundant name twice, and this may matter if we're 
dealing with hundreds or thousands of localizable resources.
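For reference, a minimal sketch of the existing MapReduce-client fragment workaround mentioned in the issue description below; the shared-cache paths and host are made up, and the point of this JIRA is to handle the naming transparently at the YARN layer instead:
{code}
import java.net.URI;
import org.apache.hadoop.mapreduce.Job;

public class FragmentNaming {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance();
    // Both shared-cache entries end in job.jar; the URI fragment is what keeps
    // the localized symlink names (foo.jar, bar.jar) from clobbering each other.
    job.addCacheFile(new URI("hdfs://nn1/sharedcache/checksum1/job.jar#foo.jar"));
    job.addCacheFile(new URI("hdfs://nn1/sharedcache/checksum2/job.jar#bar.jar"));
  }
}
{code}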

> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them by different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the paths to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized, one of them clobbers the other. This is because both symlinks in 
> the container_id directory have the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.






[jira] [Updated] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management

2017-01-20 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5734:

Attachment: YARN-5734-YARN-5734.001.patch

> OrgQueue for easy CapacityScheduler queue configuration management
> --
>
> Key: YARN-5734
> URL: https://issues.apache.org/jira/browse/YARN-5734
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: OrgQueue_API-Based_Config_Management_v1.pdf, 
> OrgQueueAPI-BasedSchedulerConfigurationManagement_v2.pdf, 
> OrgQueue_Design_v0.pdf, YARN-5734-YARN-5734.001.patch
>
>
> The current XML-based configuration mechanism in CapacityScheduler makes it 
> very inconvenient to apply any changes to the queue configurations. We saw two 
> main drawbacks in the file-based configuration mechanism:
> # This makes it very inconvenient to automate queue configuration updates. 
> For example, in our cluster setup, we leverage the queue mapping feature from 
> YARN-2411 to route users to their dedicated organization queues. It could be 
> extremely cumbersome to keep updating the config file to manage the very 
> dynamic mapping between users to organizations.
> # Even if a user has admin permission on one specific queue, that user is 
> unable to make any queue configuration changes to resize subqueues, change 
> queue ACLs, or create new queues. All these operations need to be 
> performed in a centralized manner by the cluster administrators.
> With these current limitations, we realized the need for a more flexible 
> configuration mechanism that allows queue configurations to be stored and 
> managed more dynamically. We developed the feature internally at LinkedIn 
> which introduces the concept of MutableConfigurationProvider. What it 
> essentially does is to provide a set of configuration mutation APIs that 
> allows queue configurations to be updated externally with a set of REST APIs. 
> When performing the queue configuration changes, the queue ACLs will be 
> honored, which means only queue administrators can make configuration changes 
> to a given queue. MutableConfigurationProvider is implemented as a pluggable 
> interface, and we have one implementation of this interface which is based on 
> Derby embedded database.
> This feature has been deployed on LinkedIn's Hadoop cluster for a year now, 
> and has gone through several iterations of gathering feedback from users 
> and improving accordingly. With this feature, cluster administrators are able 
> to automate many of the queue configuration management tasks, such as setting 
> queue capacities to adjust cluster resources between queues based on 
> established resource consumption patterns, or managing and updating the 
> user-to-queue mappings. We have attached our design documentation to this ticket 
> and would like to receive feedback from the community regarding how to best 
> integrate it with the latest version of YARN.






[jira] [Commented] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management

2017-01-20 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832692#comment-15832692
 ] 

Jonathan Hung commented on YARN-5734:
-

Uploaded an initial patch containing some basic end-to-end functionality.
Here are the yarn-site.xml configurations to get this working (combined into the snippet after this list):
* {{yarn.scheduler.capacity.config.path}} should be set to a directory inside 
which the database will be stored (the ResourceManager user should be able to 
create subdirectories in it)
* {{yarn.scheduler.mutable-queue-config.enabled}} should be {{true}}
* {{yarn.resourcemanager.configuration.provider-class}} should be set to 
{{org.apache.hadoop.yarn.server.resourcemanager.conf.MutableConfigurationManager}}
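The same three settings combined into one sketch via the Hadoop Configuration API (the config.path value is a made-up placeholder):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MutableConfSetup {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Directory where the Derby-backed configuration database will live (placeholder path).
    conf.set("yarn.scheduler.capacity.config.path", "/var/lib/hadoop-yarn/cs-conf-db");
    // Enable the mutable queue configuration feature.
    conf.setBoolean("yarn.scheduler.mutable-queue-config.enabled", true);
    // Use the mutable configuration provider from this patch.
    conf.set("yarn.resourcemanager.configuration.provider-class",
        "org.apache.hadoop.yarn.server.resourcemanager.conf.MutableConfigurationManager");
    System.out.println(conf.get("yarn.resourcemanager.configuration.provider-class"));
  }
}
{code}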

Here are some working examples that can be run in series, assuming a starting 
configuration of two queues, {{root.default}} (with capacity 100) and 
{{root.test}} (with capacity 0):
{noformat}curl -X PUT -H 'Content-Type: application/xml' -d '
  
root.test

  
state
STOPPED
  
  
maximum-applications
33
  

  
' --negotiate -u : 
"http://:8088/ws/v1/cluster/conf/scheduler/mutate"{noformat}
Sets the {{root.test}} queue's state to STOPPED and its maximum-applications to 
33.

{noformat}curl -X PUT -H 'Content-Type: application/xml' -d '
  
root.test
  
' --negotiate -u : 
"http://:8088/ws/v1/cluster/conf/scheduler/mutate"{noformat}
Removes the {{root.test}} queue (since it is STOPPED, leveraging YARN-5556)

{noformat}curl -X PUT -H 'Content-Type: application/xml' -d '
  
root.test2

  
maximum-applications
34
  

  
' --negotiate -u : 
"http://:8088/ws/v1/cluster/conf/scheduler/mutate"{noformat}
Adds a {{root.test2}} queue. Also sets its maximum-applications to 34.

This is just a first version, so there are some details that are not yet 
implemented/tested (e.g. specifying a hierarchical conf update). [~xgong] and 
[~wangda], do you mind taking a look to make sure our ideas/interfaces are in 
alignment?

> OrgQueue for easy CapacityScheduler queue configuration management
> --
>
> Key: YARN-5734
> URL: https://issues.apache.org/jira/browse/YARN-5734
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: OrgQueue_API-Based_Config_Management_v1.pdf, 
> OrgQueueAPI-BasedSchedulerConfigurationManagement_v2.pdf, 
> OrgQueue_Design_v0.pdf, YARN-5734-YARN-5734.001.patch
>
>
> The current XML-based configuration mechanism in CapacityScheduler makes it 
> very inconvenient to apply any changes to the queue configurations. We saw two 
> main drawbacks in the file-based configuration mechanism:
> # This makes it very inconvenient to automate queue configuration updates. 
> For example, in our cluster setup, we leverage the queue mapping feature from 
> YARN-2411 to route users to their dedicated organization queues. It could be 
> extremely cumbersome to keep updating the config file to manage the very 
> dynamic mapping between users to organizations.
> # Even if a user has admin permission on one specific queue, that user is 
> unable to make any queue configuration changes to resize subqueues, change 
> queue ACLs, or create new queues. All these operations need to be 
> performed in a centralized manner by the cluster administrators.
> With these current limitations, we realized the need for a more flexible 
> configuration mechanism that allows queue configurations to be stored and 
> managed more dynamically. We developed the feature internally at LinkedIn 
> which introduces the concept of MutableConfigurationProvider. What it 
> essentially does is to provide a set of configuration mutation APIs that 
> allows queue configurations to be updated externally with a set of REST APIs. 
> When performing the queue configuration changes, the queue ACLs will be 
> honored, which means only queue administrators can make configuration changes 
> to a given queue. MutableConfigurationProvider is implemented as a pluggable 
> interface, and we have one implementation of this interface which is based on 
> Derby embedded database.
> This feature has been deployed on LinkedIn's Hadoop cluster for a year now, 
> and has gone through several iterations of gathering feedback from users 
> and improving accordingly. With this feature, cluster administrators are able 
> to automate many of the queue configuration management tasks, such as setting 
> queue capacities to adjust cluster resources between queues based on 
> established resource consumption patterns, or managing and updating the 
> user-to-queue mappings. We have attached our design documentation to this ticket 
> and would like to receive feedback from the community regarding how t

[jira] [Commented] (YARN-5972) Support Pausing/Freezing of opportunistic containers

2017-01-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832683#comment-15832683
 ] 

Konstantinos Karanasos commented on YARN-5972:
--

bq. My opinion is that PAUSED state should not be handled any differently from 
the current QUEUED state we already persist in the store, this implies 
YARN-6059 can probably be closed (We do need to fix the ContainerScheduler to 
populate it with the running containers though, but this is orthogonal to the 
paused/resume feature and should be handled as a separate JIRA).
Regarding this, I just had an offline chat with [~asuresh]. It seems that it 
would be better to add a separate entry in the StateStore for PAUSED 
containers. This way we can decide what to do with them at node recovery. For 
example, we might want to kill them to release node resources that they might 
be holding.

I will take a look at YARN-5292 and YARN-5216 next week.
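As background, a rough sketch of the cgroup (v1) freezer mechanism that the issue description below cites as the basis for docker pause; the cgroup path is a made-up placeholder and writing to it requires root on the NM host:
{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FreezeContainerCgroup {
  public static void main(String[] args) throws Exception {
    // Hypothetical freezer cgroup created for one container by the container executor.
    Path state = Paths.get("/sys/fs/cgroup/freezer/hadoop-yarn/container_01/freezer.state");
    Files.write(state, "FROZEN\n".getBytes());  // pause all tasks in the cgroup
    Thread.sleep(5000);                         // resources freed up elsewhere in the meantime
    Files.write(state, "THAWED\n".getBytes());  // resume the paused processes
  }
}
{code}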

> Support Pausing/Freezing of opportunistic containers
> 
>
> Key: YARN-5972
> URL: https://issues.apache.org/jira/browse/YARN-5972
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: container-pause-resume.pdf
>
>
> YARN-2877 introduced OPPORTUNISTIC containers, and YARN-5216 proposes to add 
> capability to customize how OPPORTUNISTIC containers get preempted.
> In this JIRA we propose introducing a PAUSED container state.
> Instead of preempting a running container, the container can be moved to a 
> PAUSED state, where it remains until resources get freed up on the node; the 
> paused container can then resume to the running state.
> Note that process freezing is already supported by the 'cgroups freezer', 
> which is used internally by the docker pause functionality. Windows also has 
> OS-level support of a similar nature.
> One scenario where this capability is useful is work preservation. How 
> preemption is done, and whether the container supports it, is implementation 
> specific.
> For instance, if the container is a virtual machine, the preempt call would 
> pause the VM and resume would restore it to the running state.
> If the container executor / runtime doesn't support preemption, then preempt 
> would default to killing the container. 
>  






[jira] [Comment Edited] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-20 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832580#comment-15832580
 ] 

Chris Trezzo edited comment on YARN-3637 at 1/21/17 12:08 AM:
--

-It might be better to overload the use() method instead of replacing it.-

-[~templedf] Thinking about your previous comment some more, I may have missed 
your point the first time. I now realize that the overridden use method can 
simply honor the fragment portion of the url. If there is no fragment, then we 
can just use the original path's name as a new fragment to preserve the 
resource name. This can provide the same functionality without the extra 
parameter. I will fix the patch and post a new version. Let me know if you had 
something different in mind. Thanks again!-

I spoke too soon again... I am back to my original thought 
[above|https://issues.apache.org/jira/browse/YARN-3637?focusedCommentId=15829214&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15829214].
 The use method takes a checksum and an appId, so we still need some way of 
passing in the suggested name. I will leave the patch as is for now.


was (Author: ctrezzo):
-It might be better to overload the use() method instead of replacing it.-

-[~templedf] Thinking about your previous comment some more, I may have missed 
your point the first time. I now realize that the overridden use method can 
simply honor the fragment portion of the url. If there is no fragment, then we 
can just use the original path's name as a new fragment to preserve the 
resource name. This can provide the same functionality without the extra 
parameter. I will fix the patch and post a new version. Let me know if you had 
something different in mind. Thanks again!-

I spoke too soon again... I am back to my original thought above. The use 
method takes a checksum and an appId, so we still need some way of passing in 
the suggested name. I will leave the patch as is for now.

> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them by different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the paths to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized, one of them clobbers the other. This is because both symlinks in 
> the container_id directory have the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.






[jira] [Comment Edited] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-20 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832580#comment-15832580
 ] 

Chris Trezzo edited comment on YARN-3637 at 1/21/17 12:07 AM:
--

-It might be better to overload the use() method instead of replacing it.-

-[~templedf] Thinking about your previous comment some more, I may have missed 
your point the first time. I now realize that the overridden use method can 
simply honor the fragment portion of the url. If there is no fragment, then we 
can just use the original path's name as a new fragment to preserve the 
resource name. This can provide the same functionality without the extra 
parameter. I will fix the patch and post a new version. Let me know if you had 
something different in mind. Thanks again!-

I spoke too soon again... I am back to my original thought above. The use 
method takes a checksum and an appId, so we still need some way of passing in 
the suggested name. I will leave the patch as is for now.


was (Author: ctrezzo):
bq. It might be better to overload the use() method instead of replacing it.

[~templedf] Thinking about your previous comment some more, I may have missed 
your point the first time. I now realize that the overridden use method can 
simply honor the fragment portion of the url. If there is no fragment, then we 
can just use the original path's name as a new fragment to preserve the 
resource name. This can provide the same functionality without the extra 
parameter. I will fix the patch and post a new version. Let me know if you had 
something different in mind. Thanks again!

> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them as different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the path to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized one of them clobbers the other. This is because both symlinks in 
> the container_id directory are the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6106) Document FairScheduler 'allowPreemptionFrom' queue property

2017-01-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832585#comment-15832585
 ] 

Hudson commented on YARN-6106:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11155/])
YARN-6106.  Document FairScheduler 'allowPreemptionFrom' queue property. 
(rchiang: rev 9bab85cadfdfa0d9071303452f72c0a5d008480a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md


> Document FairScheduler 'allowPreemptionFrom' queue property
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3637) Handle localization sym-linking correctly at the YARN level

2017-01-20 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832580#comment-15832580
 ] 

Chris Trezzo commented on YARN-3637:


bq. It might be better to overload the use() method instead of replacing it.

[~templedf] Thinking about your previous comment some more, I may have missed 
your point the first time. I now realize that the overridden use method can 
simply honor the fragment portion of the url. If there is no fragment, then we 
can just use the original path's name as a new fragment to preserve the 
resource name. This can provide the same functionality without the extra 
parameter. I will fix the patch and post a new version. Let me know if you had 
something different in mind. Thanks again!
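For illustration, a minimal sketch of the naming rule described above; 
{{deriveLinkName()}} is a hypothetical helper, not part of the patch or the YARN API:

{code}
import java.net.URI;

// Sketch only: prefer the URI fragment as the local link name, and fall back to
// the path's file name when no fragment is present.
public final class LinkNameSketch {
  static String deriveLinkName(URI resourceUri) {
    String fragment = resourceUri.getFragment();
    if (fragment != null && !fragment.isEmpty()) {
      return fragment;                                // honor the requested name, e.g. "foo.jar"
    }
    String path = resourceUri.getPath();              // e.g. "/sharedcache/checksum1/job.jar"
    return path.substring(path.lastIndexOf('/') + 1); // fall back to "job.jar"
  }

  public static void main(String[] args) {
    System.out.println(deriveLinkName(URI.create("hdfs://nn/sharedcache/checksum1/job.jar#foo.jar"))); // foo.jar
    System.out.println(deriveLinkName(URI.create("hdfs://nn/sharedcache/checksum2/job.jar")));         // job.jar
  }
}
{code}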

> Handle localization sym-linking correctly at the YARN level
> ---
>
> Key: YARN-3637
> URL: https://issues.apache.org/jira/browse/YARN-3637
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-3637-trunk.001.patch, YARN-3637-trunk.002.patch
>
>
> The shared cache needs to handle resource sym-linking at the YARN layer. 
> Currently, we let the application layer (i.e. mapreduce) handle this, but it 
> is probably better for all applications if it is handled transparently.
> Here is the scenario:
> Imagine two separate jars (with unique checksums) that have the same name 
> job.jar.
> They are stored in the shared cache as two separate resources:
> checksum1/job.jar
> checksum2/job.jar
> A new application tries to use both of these resources, but internally refers 
> to them as different names:
> foo.jar maps to checksum1
> bar.jar maps to checksum2
> When the shared cache returns the path to the resources, both resources are 
> named the same (i.e. job.jar). Because of this, when the resources are 
> localized one of them clobbers the other. This is because both symlinks in 
> the container_id directory are the same name (i.e. job.jar) even though they 
> point to two separate resource directories.
> Originally we tackled this in the MapReduce client by using the fragment 
> portion of the resource url. This, however, seems like something that should 
> be solved at the YARN layer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5966) AMRMClient changes to support ExecutionType update

2017-01-20 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832570#comment-15832570
 ] 

Subru Krishnan commented on YARN-5966:
--

Thanks [~asuresh] for addressing my feedback.

I looked at the updated patch and I feel {{requestContainerUpdate}} should take 
in (only) {{UpdateContainerRequest}}, as that will align the API with the work 
you did earlier, and I also feel it's more robust.

If you agree with the above, additionally we should deprecate 
{{requestContainerResourceChange}}.
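For context, a rough sketch of what that shape could look like; the method names and 
signatures here are assumptions based on this discussion (including whether a 
{{Container}} argument is still passed), not the committed API:

{code}
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;

// Illustrative only: a single update entry point whose details come in as one
// UpdateContainerRequest, with the older resource-only call deprecated in its favor.
public abstract class AMRMClientUpdateSketch {

  public abstract void requestContainerUpdate(
      Container container, UpdateContainerRequest updateContainerRequest);

  /** Older resource-only API that would be deprecated per the comment above
   *  (shown with an assumed signature). */
  @Deprecated
  public abstract void requestContainerResourceChange(
      Container container, Resource capability);
}
{code}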

> AMRMClient changes to support ExecutionType update
> --
>
> Key: YARN-5966
> URL: https://issues.apache.org/jira/browse/YARN-5966
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5966.001.patch, YARN-5966.002.patch, 
> YARN-5966.003.patch, YARN-5966.004.patch, YARN-5966.wip.001.patch
>
>
> {{AMRMClient}} changes to support change of container ExecutionType



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6106) Document FairScheduler 'allowPreemptionFrom' queue property

2017-01-20 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832498#comment-15832498
 ] 

Yufei Gu commented on YARN-6106:


Thanks [~rchiang] for the quick turnaround. Yes, we will wait for YARN-6076. 

> Document FairScheduler 'allowPreemptionFrom' queue property
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2017-01-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832483#comment-15832483
 ] 

Sangjin Lee commented on YARN-4675:
---

Thanks [~Naganarasimha]! I went over the latest patch, and have some comments. 
Quite a few comments overlap with [~gtCarrera9]'s comments.

(TimelineClient.java)
- l.57: agree with [~gtCarrera9] that we want to make the constructor protected

(TimelineV2Client.java)
- I would move {{contextAppId}} to {{TimelineV2ClientImpl}}. It is used solely 
internally by {{TimelineV2ClientImpl}} and it should be defined there.
- As a follow-up, we should drop the {{appId}} argument from the 
{{TimelineV2Client}} constructor
- As a follow-up, we can drop the {{setContextAppId}} and {{getContextAppId}} 
methods
- l.37: The blurb "Create a timeline client" sounds strange. We should remove 
that sentence once it's moved.
- l.58: make this constructor protected too
- l.78,96: nit: let's use imports now that there is no ambiguity

(TimelineClientImpl.java)
- we should do a strong check of the timeline service version in 
{{serviceInit()}} so that an invalid version is rejected (see the sketch after 
this list)
- l.89: this should be removed
- we should remove {{pollTimelineServiceAddress()}}
- we should remove {{initConnConfigurator()}}
- we should remove {{initSslConnConfigurator()}}
- we should remove {{serviceRetryInterval}}
- we should remove {{DEFAULT_TIMEOUT_CONN_CONFIGURATOR}}
- we should remove {{setTimeouts()}}
- we should remove {{DEFAULT_SOCKET_TIMEOUT}}
- {{TimelineClientConnectionRetry}} and {{TimelineJerseyRetryFilter}} are 
duplicated with {{TimelineServiceHelper}}; we should determine where we need 
them and remove duplication; I would remove them from {{TimelineServiceHelper}} 
unless we think we will use them later for v.2
- l.562: we should drop the {{boolean v2}} argument
- we don't need a public {{setTimelineServiceAddress()}} method in v.1.x; I 
would simply inline it
- in {{putEntities()}}, we can remove the check if we have it in 
{{serviceInit()}}; were we always checking for == 1.5 by the way? So we don't 
support 1.0 any more?

(TimelineServiceHelper.java)
- we should remove {{TimelineClientConnectionRetry}} and 
{{TimelineJerseyRetryFilter}}

(TimelineV2ClientImpl.java)
- l.88: we should remove {{connectionRetry}}
- l.168: we should drop the {{boolean v2}} argument

(JobHistoryEventHandler.java)
- I believe we need to touch {{TestJobHistoryEventHandler}} too (the 
{{JHEvenHandlerForTest}} class)
- l.332, 458: super nit: space before the curly brace

(TestJobHistoryEventHandler.java)
- is this all about testing timeline service v.1? note that 
{{mockAppContext()}} now returns a v.1 client only
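
For the version check mentioned in the {{TimelineClientImpl.java}} section, here is 
a minimal sketch under stated assumptions: the config key string and the accepted 
set {1.0, 1.5} are illustrative, not the actual patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;

// Sketch only: reject unsupported timeline service versions at init time.
public class VersionCheckingClientSketch extends AbstractService {

  public VersionCheckingClientSketch() {
    super(VersionCheckingClientSketch.class.getName());
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    float version = conf.getFloat("yarn.timeline-service.version", 1.0f); // key assumed
    if (version != 1.0f && version != 1.5f) {
      throw new YarnRuntimeException(
          "Timeline service version " + version + " is not supported by this client");
    }
    super.serviceInit(conf);
  }
}
{code}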


> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl ,  
> TimeClientV2Impl and if required a base class, so that its clear which part 
> of the code belongs to which version and thus better maintainable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Document FairScheduler 'allowPreemptionFrom' queue property

2017-01-20 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-6106:
-
Summary: Document FairScheduler 'allowPreemptionFrom' queue property  (was: 
Document Fair Scheduler 'allowPreemptionFrom' queue property)

> Document FairScheduler 'allowPreemptionFrom' queue property
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Document Fair Scheduler queue property 'allowPreemptionFrom'

2017-01-20 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-6106:
-
Summary: Document Fair Scheduler queue property 'allowPreemptionFrom'  
(was: Add doc for tag 'allowPreemptionFrom' in Fair Scheduler)

> Document Fair Scheduler queue property 'allowPreemptionFrom'
> 
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Document Fair Scheduler 'allowPreemptionFrom' queue property

2017-01-20 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-6106:
-
Summary: Document Fair Scheduler 'allowPreemptionFrom' queue property  
(was: Document Fair Scheduler queue property 'allowPreemptionFrom')

> Document Fair Scheduler 'allowPreemptionFrom' queue property
> 
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5230) Document FairScheduler's allowPreemptionFrom flag

2017-01-20 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang resolved YARN-5230.
--
Resolution: Duplicate

> Document FairScheduler's allowPreemptionFrom flag
> -
>
> Key: YARN-5230
> URL: https://issues.apache.org/jira/browse/YARN-5230
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, fairscheduler
>Affects Versions: 2.9.0
>Reporter: Grant Sohn
>Assignee: Karthik Kambatla
>Priority: Minor
>
> Feature added in https://issues.apache.org/jira/browse/YARN-4462 is not 
> documented in the Hadoop: Fair Scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832428#comment-15832428
 ] 

Ray Chiang commented on YARN-6106:
--

Looks good to me.  +1.

Will commit this soon.


> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3610) FairScheduler: Add steady-fair-shares to the REST API documentation

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832418#comment-15832418
 ] 

Hadoop QA commented on YARN-3610:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-3610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848606/YARN-3610.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 4d90b13531d1 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d79c645 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14723/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FairScheduler: Add steady-fair-shares to the REST API documentation
> ---
>
> Key: YARN-3610
> URL: https://issues.apache.org/jira/browse/YARN-3610
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, fairscheduler
>Affects Versions: 2.7.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
> Attachments: YARN-3610.001.patch
>
>
> YARN-1050 adds documentation for FairScheduler REST API, but is missing the 
> steady-fair-share.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832417#comment-15832417
 ] 

Hadoop QA commented on YARN-6106:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6106 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848611/YARN-6106.002.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux adb9da8d3f8c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d79c645 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14724/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3053) [Security] Review and implement security in ATS v.2

2017-01-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832407#comment-15832407
 ] 

Jason Lowe commented on YARN-3053:
--

Sorry to jump in relatively late on this.  A couple of questions came up while 
looking over the document.
# For things like Slider and other long-running services, there's going to be a 
need to regenerate the ATS token (i.e.: token rolling similar to what is 
already done for other YARN tokens).  It would be good to have the strategy for 
that explained.
# How are unmanaged AMs handled?  Do they have a collector, how do they 
authenticate, etc.?
# How are entities that are _not_ AMs handled?  For example, what if a service 
outside of YARN wants to post ATS events?  Do they have a collector, how do 
they authenticate, etc.?


> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6112) fsOpDurations.addUpdateCallDuration() should be independent to LOG level

2017-01-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6112:
---
Description: In the update thread of Fair Scheduler, the 
{{fsOpDurations.addUpdateCallDuration()}} records the duration of {{update()}}, 
it should be independent to LOG level. YARN-4752 put the it inside a 
{{LOG.isDebugEnabled()}} block. Not sure any particular reason to do that. cc 
[~kasha]  (was: In update thread of Fair Scheduler, the 
{{fsOpDurations.addUpdateCallDuration()}} records the duration of {{update()}}, 
it should be independent to LOG level. YARN-4752 put the it inside a 
{{LOG.isDebugEnabled()}} block. Not sure any particular reason to do that. cc 
[~kasha])

> fsOpDurations.addUpdateCallDuration() should be independent to LOG level
> 
>
> Key: YARN-6112
> URL: https://issues.apache.org/jira/browse/YARN-6112
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> In the update thread of the Fair Scheduler, 
> {{fsOpDurations.addUpdateCallDuration()}} records the duration of 
> {{update()}}; it should be independent of the LOG level. YARN-4752 put it 
> inside a {{LOG.isDebugEnabled()}} block. Not sure of any particular reason to 
> do that. cc [~kasha]
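
A hypothetical sketch of the update-thread body implied by this description; 
{{fsOpDurations.addUpdateCallDuration()}} and {{update()}} come from the JIRA text, 
while the surrounding method, {{LOG}} and the timing code are illustrative only:

{code}
// Sketch only: record the metric unconditionally; gate only the log line.
private void runUpdateOnce() {
  long start = System.currentTimeMillis();
  update();                                            // recompute fair shares
  long durationMs = System.currentTimeMillis() - start;
  fsOpDurations.addUpdateCallDuration(durationMs);     // always recorded, regardless of log level
  if (LOG.isDebugEnabled()) {
    LOG.debug("update() took " + durationMs + " ms");  // only the debug log is gated
  }
}
{code}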



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6112) fsOpDurations.addUpdateCallDuration() should be independent to LOG level

2017-01-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6112:
---
Description: In update thread of Fair Scheduler, the 
{{fsOpDurations.addUpdateCallDuration()}} records the duration of {{update()}}, 
it should be independent to LOG level. YARN-4752 put the it inside a 
{{LOG.isDebugEnabled()}} block. Not sure any particular reason to do that. cc 
[~kasha]  (was: In update thread of Fair Scheduler, the 
{{fsOpDurations.addUpdateCallDuration()}} records the duration of {{update()}}, 
it should be independent to LOG level. )

> fsOpDurations.addUpdateCallDuration() should be independent to LOG level
> 
>
> Key: YARN-6112
> URL: https://issues.apache.org/jira/browse/YARN-6112
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> In the update thread of the Fair Scheduler, 
> {{fsOpDurations.addUpdateCallDuration()}} records the duration of 
> {{update()}}; it should be independent of the LOG level. YARN-4752 put it 
> inside a {{LOG.isDebugEnabled()}} block. Not sure of any particular reason to 
> do that. cc [~kasha]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6112) fsOpDurations.addUpdateCallDuration() should be independent to LOG level

2017-01-20 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6112:
--

 Summary: fsOpDurations.addUpdateCallDuration() should be 
independent to LOG level
 Key: YARN-6112
 URL: https://issues.apache.org/jira/browse/YARN-6112
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Reporter: Yufei Gu
Assignee: Yufei Gu


In the update thread of the Fair Scheduler, 
{{fsOpDurations.addUpdateCallDuration()}} records the duration of {{update()}}; 
it should be independent of the LOG level. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832294#comment-15832294
 ] 

Yufei Gu commented on YARN-6106:


Thanks [~rchiang] for the review. Uploaded the new patch.

> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6106:
---
Attachment: YARN-6106.002.patch

> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch, YARN-6106.002.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6106:
---
Attachment: YARN-6061.002.patch

> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6106:
---
Attachment: (was: YARN-6061.002.patch)

> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5910) Support for multi-cluster delegation tokens

2017-01-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5910:
--
Attachment: YARN-5910.7.patch

New patch addresses all comments.

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Assignee: Jian He
>Priority: Minor
> Attachments: YARN-5910.01.patch, YARN-5910.2.patch, 
> YARN-5910.3.patch, YARN-5910.4.patch, YARN-5910.5.patch, YARN-5910.6.patch, 
> YARN-5910.7.patch
>
>
> As an administrator running many secure (kerberized) clusters, some which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears itself are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail[1]. One can pre-configure 
> their {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, one could have core systems configured this way 
> but jobs could also specify their own handling of tokens and management when 
> needed?
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
> Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
> user292)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Unable to map logical nameservice URI 
> 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
> failover proxy provider configured.
> at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
> at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
> at org.apache.hadoop.security.token.Token.renew(Token.java:377)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:516)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:511)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:425)
> ... 6 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2017-01-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832277#comment-15832277
 ] 

Jian He commented on YARN-5910:
---

bq. It's confusing that the max size check is using capacity() but the error 
message uses position().
I missed changing that.
bq. I'm curious on the reasoning for removing the assert for NEW state?
Because I feel that's obvious and not needed.
bq. TestAppManager fails consistently for me with the patch applied and passes 
consistently without. Please investigate.
It's because the AM ContainerLaunchContext is null in the unit test, which fails 
with an NPE in the new code 
"submissionContext.getAMContainerSpec().getTokensConf()". I think it's OK to 
assume the AM ContainerLaunchContext is not null? As I see it, other code in this 
call path does the same, e.g. 
"submissionContext.getAMContainerSpec().getApplicationACLs()" in RMAppManager.

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Assignee: Jian He
>Priority: Minor
> Attachments: YARN-5910.01.patch, YARN-5910.2.patch, 
> YARN-5910.3.patch, YARN-5910.4.patch, YARN-5910.5.patch, YARN-5910.6.patch
>
>
> As an administrator running many secure (kerberized) clusters, some which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears itself are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail[1]. One can pre-configure 
> their {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, one could have core systems configured this way 
> but jobs could also specify their own handling of tokens and management when 
> needed?
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
> Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
> user292)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Unable to map logical nameservice URI 
> 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
> failover proxy provider configured.
> at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
> at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
> at org.apache.hadoop.security.token.Token.renew(Token.java:377)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:516)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
>

[jira] [Commented] (YARN-6099) Improve webservice to list aggregated log files

2017-01-20 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832265#comment-15832265
 ] 

Junping Du commented on YARN-6099:
--

Thanks [~xgong] for this code refactoring effort. The code does look much cleaner!
Most changes look fine to me, with one exception: for the newly added class 
PerContainerLogFileInfo, because we put it in a generic container type such as a 
List, we should override hashCode() and equals(). Otherwise, it could behave 
strangely in some corner cases.
Also, please check whether the findbugs warning is related; if so, please fix it.
The rest looks fine to me.
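
A minimal sketch of the suggested {{equals()}}/{{hashCode()}} pair; the fields below 
are assumptions about PerContainerLogFileInfo for illustration, not its actual members:

{code}
import java.util.Objects;

// Sketch only: value semantics so instances compare correctly inside a List.
public class PerContainerLogFileInfoSketch {
  private String fileName;          // hypothetical field
  private String fileSize;          // hypothetical field
  private String lastModifiedTime;  // hypothetical field

  @Override
  public boolean equals(Object other) {
    if (this == other) {
      return true;
    }
    if (!(other instanceof PerContainerLogFileInfoSketch)) {
      return false;
    }
    PerContainerLogFileInfoSketch that = (PerContainerLogFileInfoSketch) other;
    return Objects.equals(fileName, that.fileName)
        && Objects.equals(fileSize, that.fileSize)
        && Objects.equals(lastModifiedTime, that.lastModifiedTime);
  }

  @Override
  public int hashCode() {
    return Objects.hash(fileName, fileSize, lastModifiedTime);
  }
}
{code}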

> Improve webservice to list aggregated log files
> ---
>
> Key: YARN-6099
> URL: https://issues.apache.org/jira/browse/YARN-6099
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6099.1.patch, YARN-6099.branch-2.v2.patch, 
> YARN-6099.trunk.1.patch, YARN-6099.trunk.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832260#comment-15832260
 ] 

Ray Chiang commented on YARN-6106:
--

Thanks, [~yufeigu], for the patch. I recommend the following phrasing to clear 
up some grammar and ambiguities:

{quote}
determines whether the scheduler is allowed to preempt resources from the 
queue. The default is true. If a queue has this property set to false, this 
property will apply recursively to all child queues.
{quote}

> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3610) FairScheduler: Add steady-fair-shares to the REST API documentation

2017-01-20 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-3610:
-
Attachment: YARN-3610.001.patch

> FairScheduler: Add steady-fair-shares to the REST API documentation
> ---
>
> Key: YARN-3610
> URL: https://issues.apache.org/jira/browse/YARN-3610
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, fairscheduler
>Affects Versions: 2.7.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
> Attachments: YARN-3610.001.patch
>
>
> YARN-1050 adds documentation for FairScheduler REST API, but is missing the 
> steady-fair-share.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6106:
---
Description: How 'allowPreemptionFrom' works is discussed in YARN-5831. 
Basically, if the parent queue is non-preemptable, the children must be 
non-preemptable.  (was: How ''if the parent queue is non-preemptable, the 
children must be non-preemptable.)

> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch
>
>
> How 'allowPreemptionFrom' works is discussed in YARN-5831. Basically, if the 
> parent queue is non-preemptable, the children must be non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6106) Add doc for tag 'allowPreemptionFrom' in Fair Scheduler

2017-01-20 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6106:
---
Description: How ''if the parent queue is non-preemptable, the children 
must be non-preemptable.

> Add doc for tag 'allowPreemptionFrom' in Fair Scheduler
> ---
>
> Key: YARN-6106
> URL: https://issues.apache.org/jira/browse/YARN-6106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6106.001.patch
>
>
> How ''if the parent queue is non-preemptable, the children must be 
> non-preemptable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832226#comment-15832226
 ] 

Hadoop QA commented on YARN-5641:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 37s{color} | {color:orange} root: The patch generated 6 new + 103 unchanged 
- 0 fixed = 109 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
16s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5641 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848590/YARN-5641.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6c10a83a4777 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e015b56 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14722/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14722/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14722/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http

[jira] [Commented] (YARN-5928) Move ATSv2 HBase backend code into a new module that is only dependent at runtime by yarn servers

2017-01-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832221#comment-15832221
 ] 

Andrew Wang commented on YARN-5928:
---

Perfect, thanks all!

> Move ATSv2 HBase backend code into a new module that is only dependent at 
> runtime by yarn servers
> -
>
> Key: YARN-5928
> URL: https://issues.apache.org/jira/browse/YARN-5928
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-alpha2, YARN-5355
>
> Attachments: YARN-5928.01.patch, YARN-5928.02.patch, 
> YARN-5928.06.patch, YARN-5928-YARN-5355.02.patch, 
> YARN-5928-YARN-5355.03.patch, YARN-5928-YARN-5355.04.patch, 
> YARN-5928-YARN-5355.04.patch, YARN-5928-YARN-5355.05.patch, 
> YARN-5928-YARN-5355.06.patch, YARN-5928-YARN-5355.07.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2017-01-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-5667.
--
Resolution: Done

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5928) Move ATSv2 HBase backend code into a new module that is only dependent at runtime by yarn servers

2017-01-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832181#comment-15832181
 ] 

Haibo Chen commented on YARN-5928:
--

Thanks [~sjlee0] for your reviews!

> Move ATSv2 HBase backend code into a new module that is only dependent at 
> runtime by yarn servers
> -
>
> Key: YARN-5928
> URL: https://issues.apache.org/jira/browse/YARN-5928
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-alpha2, YARN-5355
>
> Attachments: YARN-5928.01.patch, YARN-5928.02.patch, 
> YARN-5928.06.patch, YARN-5928-YARN-5355.02.patch, 
> YARN-5928-YARN-5355.03.patch, YARN-5928-YARN-5355.04.patch, 
> YARN-5928-YARN-5355.04.patch, YARN-5928-YARN-5355.05.patch, 
> YARN-5928-YARN-5355.06.patch, YARN-5928-YARN-5355.07.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2017-01-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5219:
--
Attachment: YARN-5219.005.patch

Ok. "set -e" could be set for all file so that any script error will be thrown 
out as an exception.

> When an export var command fails in launch_container.sh, the full container 
> launch should fail
> --
>
> Key: YARN-5219
> URL: https://issues.apache.org/jira/browse/YARN-5219
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Hitesh Shah
>Assignee: Sunil G
> Attachments: YARN-5219.001.patch, YARN-5219.003.patch, 
> YARN-5219.004.patch, YARN-5219.005.patch, YARN-5219-branch-2.001.patch
>
>
> Today, a container fails if certain files fail to localize. However, if 
> certain env vars fail to get set up properly, either due to bugs in the yarn 
> application or misconfiguration, the actual process launch still gets 
> triggered. This results either in confusing error messages if the process 
> fails to launch or, worse yet, in the process launching but then behaving 
> incorrectly if the env var is used to control some behavioral aspect. 
> In this scenario, the issue was reproduced by trying to do export 
> abc="$\{foo.bar}", which is invalid as var names cannot contain "." in bash. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6082) Invalid REST api response for getApps since queueUsagePercentage is coming as INF

2017-01-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6082:
--
Attachment: YARN-6082.0001.patch

Attaching an initial patch. Also added a test case.

Since the base capacity is 0, we can mark queueUsedPercentage as 0, I think. 
Thoughts? cc/ [~leftnoteasy] [~rohithsharma]

> Invalid REST api response for getApps since queueUsagePercentage is coming as 
> INF
> -
>
> Key: YARN-6082
> URL: https://issues.apache.org/jira/browse/YARN-6082
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-6082.0001.patch
>
>
> {noformat}
> 2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
> response size 4476919 for call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 172.27.0.101:39950 Call#951474 Retry#0
> {noformat}
> The above server log appears and the JSON output comes back invalid.
> For example:
> {noformat}
> Unexpected token I in JSON at position 851
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5641) Localizer leaves behind tarballs after container is complete

2017-01-20 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5641:
--
Attachment: YARN-5641.009.patch

bq. It should be either java.util.concurrent.ConcurrentHashMap (and store 
something bogus for the value) or just wrap it in a SynchronizedSet.
Wrapped the HashSet inside of a SynchronizedSet.
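
For reference, a minimal sketch of the two thread-safe set options discussed here 
(illustrative only, not the actual patch):

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeSetExamples {
  public static void main(String[] args) {
    // Option 1: wrap a plain HashSet so each call is synchronized on the wrapper.
    Set<String> syncSet = Collections.synchronizedSet(new HashSet<String>());

    // Option 2: a concurrent Set view backed by ConcurrentHashMap (Java 8+),
    // which avoids storing a bogus value explicitly.
    Set<String> concurrentSet = ConcurrentHashMap.newKeySet();

    syncSet.add("resource-a");
    concurrentSet.add("resource-b");

    // Iteration over the synchronized wrapper still requires manual locking.
    synchronized (syncSet) {
      for (String r : syncSet) {
        System.out.println(r);
      }
    }
  }
}
{code}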

bq. I'm curious why the random and localDirs fields of 
ContainerLocalizerWrapper are marked protected but the others are not? Do they 
need to be marked protected?
This was a relic of when I was moving things around trying to limit scope. They 
don't need to be protected. 

Checkstyle also wants me to make all of the variables private and give them 
accessor methods instead of doing the wrapper.* access. I can fix this if you 
want. 

Attaching new patch

> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch, 
> YARN-5641.003.patch, YARN-5641.004.patch, YARN-5641.005.patch, 
> YARN-5641.006.patch, YARN-5641.007.patch, YARN-5641.008.patch, 
> YARN-5641.009.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs leaving large 
> footprints that persist on the nodes indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2017-01-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832071#comment-15832071
 ] 

Jason Lowe commented on YARN-5641:
--

We should not be using org.eclipse.jetty.util.ConcurrentHashSet.  That's a 
dependency we do not want here.  It should be either 
java.util.concurrent.ConcurrentHashMap (and store something bogus for the 
value) or just wrap it in a SynchronizedSet.  The latter is probably preferable 
since I think concurrent is overkill here.  It's not super performance 
sensitive.

I'm curious why the random and localDirs fields of ContainerLocalizerWrapper 
are marked protected but the others are not?  Do they need to be marked 
protected?

> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch, 
> YARN-5641.003.patch, YARN-5641.004.patch, YARN-5641.005.patch, 
> YARN-5641.006.patch, YARN-5641.007.patch, YARN-5641.008.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs leaving large 
> footprints that persist on the nodes indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5889) Improve user-limit calculation in capacity scheduler

2017-01-20 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832068#comment-15832068
 ] 

Eric Payne commented on YARN-5889:
--

bq. it never gets above {{queue-capacity * MULP}}
[~sunilg] and [~leftnoteasy], although this statement is true and I correctly 
stated the symptoms, I misdiagnosed the root cause in my [comments 
above|https://issues.apache.org/jira/secure/EditComment!default.jspa?id=13021186&commentId=15829005].
 Sorry for the confusion.

It appears that the root cause is that {{UM#User#assignContainer}} is not 
incrementing {{TotalResUsedByActiveUsers}} for the AM. The first time through 
{{assignContainer}} for a new app, the user isn't active yet, so the used 
resources count is not incremented. Consequently, 
{{resource-used-by-active-users}} is always smaller than the actual value, and 
never gets bigger than {{queue-capacity * MULP}}:
{code: title=UsersManager#computeUserLimit}
active-user-limit = max(
   resource-used-by-active-users / #active-users,
   queue-capacity * MULP
)
{code}
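
Schematically, that computation amounts to the following (an illustrative 
rendering only; names and types are invented and this is not the actual 
{{UsersManager}} code):

{code}
public class UserLimitSketch {
  /** mulp is minimum-user-limit-percent expressed as a fraction, e.g. 0.25 for 25%. */
  static long computeActiveUserLimit(long resourceUsedByActiveUsers,
                                     int activeUsers,
                                     long queueCapacity,
                                     double mulp) {
    long fairShareOfActiveUsage =
        resourceUsedByActiveUsers / Math.max(activeUsers, 1);
    long minimumGuarantee = (long) (queueCapacity * mulp);
    // If resource-used-by-active-users is under-counted (e.g. the AM container
    // is never added), the first term stays artificially small and the limit
    // never rises above queue-capacity * MULP.
    return Math.max(fairShareOfActiveUsage, minimumGuarantee);
  }
}
{code}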

[~sunilg], do we need the {{isAnActiveUser}} checks in {{assignContainer}} and 
{{releaseContainer}}? I removed these checks in my local build and the 
application is able to use all of the queue and cluster.

> Improve user-limit calculation in capacity scheduler
> 
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5889.0001.patch, 
> YARN-5889.0001.suggested.patchnotes, YARN-5889.0002.patch, 
> YARN-5889.0003.patch, YARN-5889.0004.patch, YARN-5889.0005.patch, 
> YARN-5889.v0.patch, YARN-5889.v1.patch, YARN-5889.v2.patch
>
>
> Currently the user-limit is computed during every heartbeat allocation cycle 
> under a write lock. To improve performance, this ticket focuses on moving the 
> user-limit calculation out of the heartbeat allocation flow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5547) NMLeveldbStateStore should be more tolerant of unknown keys

2017-01-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832050#comment-15832050
 ] 

Jason Lowe commented on YARN-5547:
--

Thanks for updating the patch!

We're still storing a redundant killed state upon recovery.

'unknownKeysuffix' s/b 'unknownKeySuffix'

'unknownKeysForConatainer' s/b 'unknownKeysForContainer'


> NMLeveldbStateStore should be more tolerant of unknown keys
> ---
>
> Key: YARN-5547
> URL: https://issues.apache.org/jira/browse/YARN-5547
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Ajith S
> Attachments: YARN-5547.01.patch, YARN-5547.02.patch, 
> YARN-5547.03.patch, YARN-5547.04.patch
>
>
> Whenever new keys are added to the NM state store it will break rolling 
> downgrades because the code will throw if it encounters an unrecognized key.  
> If instead it skipped unrecognized keys it could be simpler to continue 
> supporting rolling downgrades.  We need to define the semantics of 
> unrecognized keys when containers and apps are cleaned up, e.g.: we may want 
> to delete all keys underneath an app or container directory when it is being 
> removed from the state store to prevent leaking unrecognized keys.
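
As a rough illustration of the skip-unknown-keys idea (the key layout, suffix 
names, and types below are invented and do not reflect the real NM leveldb 
schema):

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TolerantRecoverySketch {
  private static final Set<String> KNOWN_SUFFIXES =
      new HashSet<>(Arrays.asList("diagnostics", "exitcode", "killed"));

  /** Skip, rather than throw on, keys written by a newer NM version. */
  public static void recoverContainer(Map<String, byte[]> containerKeys) {
    for (Map.Entry<String, byte[]> entry : containerKeys.entrySet()) {
      String key = entry.getKey();
      String suffix = key.substring(key.lastIndexOf('/') + 1);
      if (!KNOWN_SUFFIXES.contains(suffix)) {
        // Unknown key from a newer version: log and continue so a rolling
        // downgrade can still recover the container.
        System.err.println("Skipping unrecognized state-store key: " + key);
        continue;
      }
      // ... apply the recognized key/value to the recovered container state ...
    }
  }
}
{code}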



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2017-01-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832014#comment-15832014
 ] 

Jason Lowe commented on YARN-5910:
--

Thanks for updating the patch!

Nit: I think it should be more clear that the regex in the documentation is 
just an example and not the default, e.g.: "This regex" s/b "For example the 
following regex".

DEFAULT_RM_DELEGATION_TOKEN_MAX_SIZE doesn't match yarn-default.xml.

It's confusing that the max size check is using capacity() but the error 
message uses position().

I'm curious about the reasoning for removing the assert for the NEW state?

I was unable to reproduce the TestRMRestart and 
TestMRIntermediateDataEncryption failures with the patch, but TestAppManager 
fails consistently for me with the patch applied and passes consistently 
without.  Please investigate.

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Assignee: Jian He
>Priority: Minor
> Attachments: YARN-5910.01.patch, YARN-5910.2.patch, 
> YARN-5910.3.patch, YARN-5910.4.patch, YARN-5910.5.patch, YARN-5910.6.patch
>
>
> As an administrator running many secure (kerberized) clusters, some of which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears its head are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail[1]. One can pre-configure 
> the {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available, but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, one could have core systems configured this way, 
> but jobs could also specify their own handling of tokens and management when 
> needed?
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
> Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
> user292)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Unable to map logical nameservice URI 
> 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
> failover proxy provider configured.
> at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
> at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
> at org.apache.hadoop.security.token.Token.renew(Token.java:377)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:516)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security

[jira] [Commented] (YARN-5994) TestCapacityScheduler.testAMLimitUsage fails intermittently

2017-01-20 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831927#comment-15831927
 ] 

Eric Badger commented on YARN-5994:
---

[~bibinchundatt], [~sunilg], could you please review the latest patch? Thanks!

> TestCapacityScheduler.testAMLimitUsage fails intermittently
> ---
>
> Key: YARN-5994
> URL: https://issues.apache.org/jira/browse/YARN-5994
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler-output.txt,
>  YARN-5994.001.patch
>
>
> {noformat}
> java.lang.AssertionError: app shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:169)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:577)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:488)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.verifyAMLimitForLeafQueue(TestCapacityScheduler.java:3389)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testAMLimitUsage(TestCapacityScheduler.java:3251)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-01-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831792#comment-15831792
 ] 

Jason Lowe commented on YARN-574:
-

No, that's still racy.  There's a window where the worker thread has dequeued 
the task (i.e.: queue size is now zero) but has not set the flag yet to 
indicate it is active.  So we can still end up doing quick heartbeats and end 
up obtaining more work than we're prepared to handle in parallel.
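
One race-free alternative, as a minimal sketch (illustrative only; names and 
structure are not from any attached patch): count "queued or running" work with a 
single atomic counter that is incremented before submission and decremented only 
after the task completes, so the check can never observe an empty queue while a 
download is still being picked up.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class DownloadTracker {
  // Pool size is arbitrary here; the real localizer's parallelism is configurable.
  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  private final AtomicInteger outstanding = new AtomicInteger();

  public void submit(Runnable download) {
    outstanding.incrementAndGet();          // counted before it is queued
    pool.execute(() -> {
      try {
        download.run();
      } finally {
        outstanding.decrementAndGet();      // counted until it has fully finished
      }
    });
  }

  /** Safe to ask for more work only when nothing is queued or in flight. */
  public boolean readyForMoreWork() {
    return outstanding.get() == 0;
  }
}
{code}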


> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.1.patch, 
> YARN-574.2.patch
>
>
> At present, private resources are downloaded in parallel only if multiple 
> containers request the same resource; otherwise the downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads, however it is not used and only one resource is sent for 
> downloading at a time.
> I think we can increase / assure parallelism (even for a single container 
> requesting resources) for private/application resources by making multiple 
> downloads per ContainerLocalizer.
> Total Parallelism before
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers [private and application resource]
> Total Parallelism after
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers * max downloads per container [private and application resource]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6082) Invalid REST api response for getApps since queueUsagePercentage is coming as INF

2017-01-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831405#comment-15831405
 ] 

Sunil G commented on YARN-6082:
---

Yes [~vinodkv], you are correct. Both are unrelated.

*Analysis*

In {{SchedulerApplicationAttempt#getResourceUsageReport}}, QueueUsagePercent is 
calculated as follows

{code}
float queueUsagePerc = calc.divide(cluster, usedResourceClone, Resources
.multiply(cluster, queue.getQueueInfo(false, false).getCapacity()))
* 100;
{code}
Hence, if {{queue.getQueueInfo(false, false).getCapacity()}} is 0, we will get a 
divide-by-zero result (INF), so *queueUsagePercentage* comes out as INF and 
corrupts the JSON output.
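
One possible guard, as a sketch only (not the attached patch, and simplified to 
plain floats rather than the Resources/ResourceCalculator API): treat a zero 
configured capacity as 0% usage instead of dividing by it.

{code}
public class QueueUsageSketch {
  /**
   * Illustrative guard: compute queue usage percentage while protecting against
   * a zero configured capacity, which would otherwise yield INF and break the JSON.
   */
  static float queueUsagePercentage(float usedResources,
                                    float clusterResources,
                                    float queueCapacityFraction) {
    float denominator = clusterResources * queueCapacityFraction;
    if (denominator <= 0f) {
      return 0f;  // base capacity is 0: report 0% rather than INF/NaN
    }
    return (usedResources / denominator) * 100f;
  }
}
{code}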



> Invalid REST api response for getApps since queueUsagePercentage is coming as 
> INF
> -
>
> Key: YARN-6082
> URL: https://issues.apache.org/jira/browse/YARN-6082
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
>
> {noformat}
> 2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
> response size 4476919 for call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 172.27.0.101:39950 Call#951474 Retry#0
> {noformat}
> The above server log appears and the JSON output comes back invalid.
> For example:
> {noformat}
> Unexpected token I in JSON at position 851
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-574) PrivateLocalizer does not support parallel resource download via ContainerLocalizer

2017-01-20 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831378#comment-15831378
 ] 

Ajith S commented on YARN-574:
--

Thanks for the input, [~jlowe]; I agree with the race condition you mentioned.
To simplify the approach, can we instead track the size of the 
{{LinkedBlockingQueue}} passed to the executor and avoid doing the quick 
heartbeats in case the {{LinkedBlockingQueue}} size is greater than zero?

> PrivateLocalizer does not support parallel resource download via 
> ContainerLocalizer
> ---
>
> Key: YARN-574
> URL: https://issues.apache.org/jira/browse/YARN-574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.6.0, 2.8.0, 2.7.1
>Reporter: Omkar Vinit Joshi
>Assignee: Ajith S
> Attachments: YARN-574.03.patch, YARN-574.04.patch, YARN-574.1.patch, 
> YARN-574.2.patch
>
>
> At present, private resources are downloaded in parallel only if multiple 
> containers request the same resource; otherwise the downloads are serial. 
> The protocol between PrivateLocalizer and ContainerLocalizer supports 
> multiple downloads, however it is not used and only one resource is sent for 
> downloading at a time.
> I think we can increase / assure parallelism (even for a single container 
> requesting resources) for private/application resources by making multiple 
> downloads per ContainerLocalizer.
> Total Parallelism before
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers [private and application resource]
> Total Parallelism after
> = number of threads allotted for PublicLocalizer [public resource] + number 
> of containers * max downloads per container [private and application resource]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6111) [SLS] The realtimetrack.json is empty

2017-01-20 Thread YuJie Huang (JIRA)
YuJie Huang created YARN-6111:
-

 Summary: [SLS] The realtimetrack.json is empty 
 Key: YARN-6111
 URL: https://issues.apache.org/jira/browse/YARN-6111
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.3, 2.6.0
 Environment: ubuntu14.0.4 os
Reporter: YuJie Huang
 Fix For: 2.7.3


Hi guys,
I am trying to learn the use of SLS.
I would like to get the file realtimetrack.json, but it only 
contains "[]" at the end of a simulation. This is the command I use to 
run the instance:

HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
--output-dir=sample-data 

All other files, including the metrics, appear to be properly populated. I can 
also trace the run on the web at http://localhost:10001/simulate
Can someone help?

Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831364#comment-15831364
 ] 

Hadoop QA commented on YARN-5910:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 57s{color} | {color:orange} root: The patch generated 29 new + 1445 
unchanged - 10 fixed = 1474 total (was 1455) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {co

[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2017-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831365#comment-15831365
 ] 

Hadoop QA commented on YARN-5910:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 57s{color} | {color:orange} root: The patch generated 29 new + 1445 
unchanged - 10 fixed = 1474 total (was 1455) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 908 unchanged - 5 fixed = 908 total (was 913) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {co

[jira] [Updated] (YARN-6082) Invalid REST api response for getApps since queueUsagePercentage is coming as INF

2017-01-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6082:
--
Description: 
{noformat}
2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
response size 4476919 for call 
org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
172.27.0.101:39950 Call#951474 Retry#0
{noformat}
The above server log appears and the JSON output comes back invalid.

For example:
{noformat}
Unexpected token I in JSON at position 851
{noformat}


  was:
{noformat}
2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
response size 4476919 for call 
org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
172.27.0.101:39950 Call#951474 Retry#0
{noformat}

In such cases, the JSON output gets cut off and the client does not get a 
clean response.
For example:
{noformat}
Unexpected token I in JSON at position 851
{noformat}


> Invalid REST api response for getApps since queueUsagePercentage is coming as 
> INF
> -
>
> Key: YARN-6082
> URL: https://issues.apache.org/jira/browse/YARN-6082
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
>
> {noformat}
> 2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
> response size 4476919 for call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 172.27.0.101:39950 Call#951474 Retry#0
> {noformat}
> The above server log appears and the JSON output comes back invalid.
> For example:
> {noformat}
> Unexpected token I in JSON at position 851
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6082) Invalid REST api response for getApps since queueUsagePercentage is coming as INF

2017-01-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6082:
--
Summary: Invalid REST api response for getApps since queueUsagePercentage 
is coming as INF  (was: Invalid REST api response for getApps as 
queueUsagePercentage is INF)

> Invalid REST api response for getApps since queueUsagePercentage is coming as 
> INF
> -
>
> Key: YARN-6082
> URL: https://issues.apache.org/jira/browse/YARN-6082
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
>
> {noformat}
> 2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
> response size 4476919 for call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 172.27.0.101:39950 Call#951474 Retry#0
> {noformat}
> In such cases, the JSON output gets cut off and the client does not get a 
> clean response.
> For example:
> {noformat}
> Unexpected token I in JSON at position 851
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6082) Invalid REST api response for getApps as queueUsagePercentage is INF

2017-01-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-6082:
-

Assignee: Sunil G

> Invalid REST api response for getApps as queueUsagePercentage is INF
> 
>
> Key: YARN-6082
> URL: https://issues.apache.org/jira/browse/YARN-6082
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
>
> {noformat}
> 2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
> response size 4476919 for call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 172.27.0.101:39950 Call#951474 Retry#0
> {noformat}
> In such cases, the JSON output gets cut off and the client does not get a 
> clean response.
> For example:
> {noformat}
> Unexpected token I in JSON at position 851
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6082) Invalid REST api response for getApps as queueUsagePercentage is INF

2017-01-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6082:
--
Summary: Invalid REST api response for getApps as queueUsagePercentage is 
INF  (was: Webservice connection gets cutoff when it has to send back a large 
response)

> Invalid REST api response for getApps as queueUsagePercentage is INF
> 
>
> Key: YARN-6082
> URL: https://issues.apache.org/jira/browse/YARN-6082
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Sunil G
>Priority: Critical
>
> {noformat}
> 2017-01-11 07:17:11,475 WARN  ipc.Server (Server.java:run(2202)) - Large 
> response size 4476919 for call 
> org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications from 
> 172.27.0.101:39950 Call#951474 Retry#0
> {noformat}
> In such cases, the JSON output gets cut off and the client does not get a 
> clean response.
> For example:
> {noformat}
> Unexpected token I in JSON at position 851
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org