[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653359#comment-15653359
 ] 

Karthik Kambatla commented on YARN-4597:


[~asuresh] - could you hold off until Friday evening? I'll try and get to it by 
then. Thanks.

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch, YARN-4597.008.patch, 
> YARN-4597.009.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.
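
For illustration only, a minimal sketch of the idea assuming a simple stage enum; the 
names below are invented for this example and are not the patch's actual NM state machine:

{code}
// Hypothetical NM container stages with a separate SCHEDULED step between
// localization and launch, where resources are reserved before the process starts.
enum ContainerStage {
  NEW,
  LOCALIZING,
  LOCALIZED,
  SCHEDULED,   // proposed: resources reserved, container queued for launch
  RUNNING,
  COMPLETE
}
{code}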






[jira] [Updated] (YARN-4218) Metric for resource*time that was preempted

2016-11-09 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated YARN-4218:
---
Attachment: YARN-4218-branch-2.003.patch

> Metric for resource*time that was preempted
> ---
>
> Key: YARN-4218
> URL: https://issues.apache.org/jira/browse/YARN-4218
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4218-branch-2.003.patch, YARN-4218.006.patch, 
> YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.2.patch, 
> YARN-4218.3.patch, YARN-4218.4.patch, YARN-4218.5.patch, 
> YARN-4218.branch-2.2.patch, YARN-4218.branch-2.patch, YARN-4218.patch, 
> YARN-4218.trunk.2.patch, YARN-4218.trunk.3.patch, YARN-4218.trunk.patch, 
> YARN-4218.wip.patch, screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> After YARN-415 we have the ability to track the resource*time footprint of a 
> job, and preemption metrics show how many containers were preempted on a job. 
> However we don't have a metric showing the resource*time footprint cost of 
> preemption. In other words, we know how many containers were preempted but we 
> don't have a good measure of how much work was lost as a result of preemption.
> We should add this metric so we can analyze how much work preemption is 
> costing on a grid and better track which jobs were heavily impacted by it. A 
> job that has 100 containers preempted that only lasted a minute each and were 
> very small is going to be less impacted than a job that only lost a single 
> container but that container was huge and had been running for 3 days.
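
As a rough sketch of what such a metric could accumulate (class, field and method names 
below are illustrative, not the attached patches' API):

{code}
class PreemptionCostMetricSketch {
  // resource*time lost to preemption, accumulated per job
  private long preemptedMemoryMBSeconds;
  private long preemptedVcoreSeconds;

  void containerPreempted(long memoryMB, int vcores, long runningSeconds) {
    preemptedMemoryMBSeconds += memoryMB * runningSeconds;
    preemptedVcoreSeconds += (long) vcores * runningSeconds;
  }
}
// 100 small 1 GB containers preempted after 1 minute: 100 * 1024 * 60 ~ 6.1e6 MB-seconds.
// One 50 GB container preempted after 3 days: 51200 * 259200 ~ 1.3e10 MB-seconds.
{code}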






[jira] [Updated] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource

2016-11-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5453:
---
Fix Version/s: 3.0.0-alpha2

> FairScheduler#update may skip update demand resource of child queue/app if 
> current demand reached maxResource
> -
>
> Key: YARN-5453
> URL: https://issues.apache.org/jira/browse/YARN-5453
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-easy
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5453.01.patch, YARN-5453.02.patch, 
> YARN-5453.03.patch, YARN-5453.04.patch, YARN-5453.05.patch
>
>
> {code}
>   demand = Resources.createResource(0);
>   for (FSQueue childQueue : childQueues) {
> childQueue.updateDemand();
> Resource toAdd = childQueue.getDemand();
> demand = Resources.add(demand, toAdd);
> demand = Resources.componentwiseMin(demand, maxRes);
> if (Resources.equals(demand, maxRes)) {
>   break;
> }
>   }
> {code}
> If a single queue's demand resource exceeds maxRes, the other queues' demand 
> resources will not be updated.
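
One possible direction, sketched here under the assumption that only the aggregate needs 
to be capped (a sketch, not necessarily the committed patch):

{code}
demand = Resources.createResource(0);
for (FSQueue childQueue : childQueues) {
  // Always refresh every child's demand, even after the parent's cap is reached,
  // so a single over-sized child cannot leave its siblings with stale demand.
  childQueue.updateDemand();
  demand = Resources.add(demand, childQueue.getDemand());
}
// Apply the queue-level cap once, after all children have been visited.
demand = Resources.componentwiseMin(demand, maxRes);
{code}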






[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource

2016-11-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653324#comment-15653324
 ] 

Karthik Kambatla commented on YARN-5453:


+1. Checking this in..

> FairScheduler#update may skip update demand resource of child queue/app if 
> current demand reached maxResource
> -
>
> Key: YARN-5453
> URL: https://issues.apache.org/jira/browse/YARN-5453
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-easy
> Attachments: YARN-5453.01.patch, YARN-5453.02.patch, 
> YARN-5453.03.patch, YARN-5453.04.patch, YARN-5453.05.patch
>
>
> {code}
>   demand = Resources.createResource(0);
>   for (FSQueue childQueue : childQueues) {
> childQueue.updateDemand();
> Resource toAdd = childQueue.getDemand();
> demand = Resources.add(demand, toAdd);
> demand = Resources.componentwiseMin(demand, maxRes);
> if (Resources.equals(demand, maxRes)) {
>   break;
> }
>   }
> {code}
> If a single queue's demand resource exceeds maxRes, the other queues' demand 
> resources will not be updated.






[jira] [Updated] (YARN-4218) Metric for resource*time that was preempted

2016-11-09 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated YARN-4218:
---
Attachment: YARN-4218.006.patch

> Metric for resource*time that was preempted
> ---
>
> Key: YARN-4218
> URL: https://issues.apache.org/jira/browse/YARN-4218
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4218.006.patch, YARN-4218.2.patch, 
> YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.3.patch, 
> YARN-4218.4.patch, YARN-4218.5.patch, YARN-4218.branch-2.2.patch, 
> YARN-4218.branch-2.patch, YARN-4218.patch, YARN-4218.trunk.2.patch, 
> YARN-4218.trunk.3.patch, YARN-4218.trunk.patch, YARN-4218.wip.patch, 
> screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> After YARN-415 we have the ability to track the resource*time footprint of a 
> job, and preemption metrics show how many containers were preempted on a job. 
> However we don't have a metric showing the resource*time footprint cost of 
> preemption. In other words, we know how many containers were preempted but we 
> don't have a good measure of how much work was lost as a result of preemption.
> We should add this metric so we can analyze how much work preemption is 
> costing on a grid and better track which jobs were heavily impacted by it. A 
> job that has 100 containers preempted that only lasted a minute each and were 
> very small is going to be less impacted than a job that only lost a single 
> container but that container was huge and had been running for 3 days.






[jira] [Commented] (YARN-5864) Capacity Scheduler preemption for fragmented cluster

2016-11-09 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653315#comment-15653315
 ] 

Carlo Curino commented on YARN-5864:


[~wangda] I understand the need for this feature, but my general concern is 
that the collection of features in the CS has very poorly defined interactions, 
and worse, they violate each other's invariants left, right and center. For 
example, non-preemptable queues, when in use, break the fair over-capacity 
sharing semantics. Similarly, locality and node labels have heavy and not fully 
clear redundancies, and user-limits / app priorities / request priorities / 
container types / etc. further complicate this space. The mental model 
associated with the system is growing disproportionately for both users and 
operators, and this is a bad sign.

The new feature you propose seems to push us further down this slippery slope, 
where the semantics of what a tenant gets for his/her money are very unclear. 
Until this feature, the one invariant we had not violated yet was that if I 
paid for capacity C, and I am within capacity C, my containers will not be 
disturbed (regardless of other tenants' desires). Now a queue may or may not be 
preempted within its capacity to accommodate some other queue's large 
containers.

This opens up many abuses, one that comes to mind:
 # I request a large container on node N1, 
 # preemption kicks out some other tenant, 
 # I get the container on N1, 
 # I reduce the size of the container on N1 to a normal size containers... 
 # (I repeat till I grab all the nodes I want).  
Through this little trick a nasty user can simply bully his/her way onto the 
nodes he/she wants, regardless of the container size he/she really needs and 
his/her capacity standing w.r.t. other tenants. I am sure that if we squint 
hard enough there is a combination of configurations that can prevent this, but 
the general concern remains.


Bottom line: I don't want to stand in the way of progress and important 
features, but I don't see this ending well. 

I see two paths forward:
# a deep refactoring to make the code manageable, and an analysis that produces 
crisp semantics for each of the N! combinations of our features---this should 
ideally lead to cutting all "nice on the box" features that are rarely/never 
used or have undefined semantics. 
# Keep CS for legacy, and create a new -based 
scheduler for which we can prove clear semantics, and that allows 
users/operators to have a simple mental model of what the system is supposed to 
deliver.

(2) would be my favorite option if I had a choice.


> Capacity Scheduler preemption for fragmented cluster 
> -
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch
>
>
> YARN-4390 added preemption for reserved container. However, we found one case 
> that large container cannot be allocated even if all queues are under their 
> limit.
> For example, we have:
> {code}
> Two queues, a and b, capacity 50:50 
> Two nodes: n1 and n2, each of them have 50 resource 
> Now queue-a uses 10 on n1 and 10 on n2
> queue-b asks for one single container with resource=45. 
> {code} 
> The container could be reserved on any of the host, but no preemption will 
> happen because all queues are under their limits. 
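
A back-of-the-envelope restatement of why nothing fits and nothing is preempted, using 
the numbers from the description (variable names invented for this sketch):

{code}
int nodeCapacity = 50;
int usedByQueueAPerNode = 10;                            // queue-a on n1 and on n2
int freePerNode = nodeCapacity - usedByQueueAPerNode;    // 40 free on each node
int singleContainerAsk = 45;                             // queue-b's one request

boolean fitsOnSomeNode = singleContainerAsk <= freePerNode;   // false on both nodes
boolean queueAOverLimit = 2 * usedByQueueAPerNode > 50;       // 20 > 50 -> false
// Preemption today only fires when some queue is over its limit, so the reserved
// 45-unit container can never be placed even though both queues are within capacity.
{code}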






[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-09 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653307#comment-15653307
 ] 

Bibin A Chundatt commented on YARN-5545:


Thank you [~sunilg] for the review.
Attaching a patch that handles both the testcase fix and the condition check.

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.0006.patch, 
> YARN-5545.0007.patch, YARN-5545.0008.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}






[jira] [Updated] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-09 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5545:
---
Attachment: YARN-5545.0008.patch

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.0006.patch, 
> YARN-5545.0007.patch, YARN-5545.0008.patch, YARN-5545.004.patch, 
> capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}






[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-11-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653301#comment-15653301
 ] 

Karthik Kambatla commented on YARN-5552:


Eye-balled the changes and they look good to me. Defer to [~leftnoteasy] for a 
more thorough review. 

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch, YARN-5552.006.patch, YARN-5552.007.patch, 
> YARN-5552.008.patch, YARN-5552.009.patch
>
>
> Currently YARN API records such as ResourceRequest, AllocateRequest/Response, 
> as well as AMRMClient.ContainerRequest, have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])
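
A generic sketch of the builder idea (the class below is an illustration, not the record 
API in the attached patches): a later-added optional field only needs a defaulted setter 
on the builder, so existing call sites keep compiling.

{code}
public final class ContainerRequestSketch {
  private final String resourceName;
  private final int numContainers;
  private final boolean relaxLocality;   // added later; defaulted in the builder

  private ContainerRequestSketch(Builder b) {
    this.resourceName = b.resourceName;
    this.numContainers = b.numContainers;
    this.relaxLocality = b.relaxLocality;
  }

  public static final class Builder {
    private String resourceName = "*";
    private int numContainers = 1;
    private boolean relaxLocality = true;

    public Builder resourceName(String v) { this.resourceName = v; return this; }
    public Builder numContainers(int v) { this.numContainers = v; return this; }
    public Builder relaxLocality(boolean v) { this.relaxLocality = v; return this; }
    public ContainerRequestSketch build() { return new ContainerRequestSketch(this); }
  }
}
{code}

A caller would then write something like 
{{new ContainerRequestSketch.Builder().numContainers(4).build()}}, and evolving the 
record later only touches the builder.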






[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653290#comment-15653290
 ] 

Hadoop QA commented on YARN-5600:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 577 unchanged - 7 fixed = 578 total (was 584) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
31s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5600 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838299/YARN-5600.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 35c7d9d948ac 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c202a10 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13852/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5856) Unnecessary duplicate start container request sent to NM State store

2016-11-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653268#comment-15653268
 ] 

Varun Saxena commented on YARN-5856:


Thanks [~Naganarasimha] for the commit.

> Unnecessary duplicate start container request sent to NM State store
> 
>
> Key: YARN-5856
> URL: https://issues.apache.org/jira/browse/YARN-5856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5856.01.patch
>
>
> In ContainerManagerImpl#startContainerInternal, a duplicate store-container 
> request is sent to the NM state store, which is unnecessary.
> {code}
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> dispatcher.getEventHandler().handle(
>   new ApplicationContainerInitEvent(container));
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> {code}
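
For clarity, the de-duplicated flow would keep a single store call (a sketch of the fix 
direction; the exact placement in the committed patch may differ):

{code}
// Store the container in the NM state store exactly once, then dispatch the init
// event; the second storeContainer call in the snippet above is the redundant one.
this.context.getNMStateStore().storeContainer(containerId,
    containerTokenIdentifier.getVersion(), request);
dispatcher.getEventHandler().handle(
    new ApplicationContainerInitEvent(container));
{code}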






[jira] [Commented] (YARN-5843) Incorrect documentation for timeline service entityType/events REST end points

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653251#comment-15653251
 ] 

Hudson commented on YARN-5843:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10810 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10810/])
YARN-5843. Incorrect documentation for timeline service (varunsaxena: rev 
c8bc7a84758f360849b96c2e19c8c41b7e9dbb65)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md


> Incorrect documentation for timeline service entityType/events REST end points
> --
>
> Key: YARN-5843
> URL: https://issues.apache.org/jira/browse/YARN-5843
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: YARN-5843.0001.patch, YARN-5843.0002.patch
>
>
> http(s)://<address:port>/ws/v1/timeline/{entityType}/events
> {noformat}
> entityIds - The entity IDs to retrieve events for.
> limit - A limit on the number of events to return for each entity. If null, 
> defaults to 100 events per entity.
> windowStart - If not null, retrieves only events later than the given time 
> (exclusive)
> windowEnd - If not null, retrieves only events earlier than the given time 
> (inclusive)
> eventTypes - Restricts the events returned to the given types. If null, 
> events of all types will be returned.
> {noformat}
> The parameters should be
> *entityId*
> *eventType*
> Mention comma-separated *entityId* and *entityType* for multiple arguments






[jira] [Commented] (YARN-5866) Fix few issues reported by jshint in new YARN UI

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653247#comment-15653247
 ] 

Hadoop QA commented on YARN-5866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5866 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838300/YARN-5866.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux b3524c3beb00 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c8bc7a8 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13853/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13853/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This JIRA is to track and fix those issues.






[jira] [Commented] (YARN-5819) Verify fairshare and minshare preemption

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653242#comment-15653242
 ] 

Hadoop QA commented on YARN-5819:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 30 unchanged - 0 fixed = 31 total (was 30) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m 
29s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838296/yarn-5819.YARN-4752.4.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 206e4a7ee86a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4752 / 9568f41 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13851/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13851/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13851/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Verify fairshare and minshare preemption
> 
>
> Key: YARN-5819
>

[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653235#comment-15653235
 ] 

Sunil G commented on YARN-5545:
---

[~bibinchundatt]
In {{isSystemAppsLimitReached}}, a less-than-or-equal-to check is used. Hence I 
feel one extra application can get submitted. Could you please help confirm 
that?
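
A minimal illustration of the concern (the method shape below is hypothetical; only the 
comparison operator is the point under discussion):

{code}
// With "<=", a count equal to the maximum is still treated as "limit not reached",
// so one more application is accepted and the system ends up one over the max.
boolean isSystemAppsLimitReached(int currentApps, int maxSystemApps) {
  return currentApps > maxSystemApps;   // "not reached" while currentApps <= maxSystemApps
}
{code}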

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.0006.patch, 
> YARN-5545.0007.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}




[jira] [Commented] (YARN-5862) TestDiskFailures.testLocalDirsFailures failed

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653212#comment-15653212
 ] 

Hudson commented on YARN-5862:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10809 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10809/])
YARN-5862. TestDiskFailures.testLocalDirsFailures failed (Yufei Gu via 
(varunsaxena: rev c202a10923a46a6e7f7f518e6e3dbb6545dbb971)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestDiskFailures.java


> TestDiskFailures.testLocalDirsFailures failed
> -
>
> Key: YARN-5862
> URL: https://issues.apache.org/jira/browse/YARN-5862
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5862.001.patch
>
>
> {code}
> java.util.NoSuchElementException: null 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:247)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:179)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
>  
> {code}
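
The trace boils down to calling next() on an exhausted iterator; a tiny standalone 
reproduction of that failure mode (not the test's actual code) is:

{code}
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

public class EmptyIteratorRepro {
  public static void main(String[] args) {
    Iterator<String> it = new ConcurrentHashMap<String, String>().values().iterator();
    // hasNext() is false here; calling next() anyway throws NoSuchElementException,
    // which suggests the map iterated in verifyDisksHealth was empty at that point.
    it.next();
  }
}
{code}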






[jira] [Updated] (YARN-5866) Fix few issues reported by jshint in new YARN UI

2016-11-09 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5866:
---
Attachment: YARN-5866.001.patch

> Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This JIRA is to track and fix those issues.






[jira] [Commented] (YARN-5862) TestDiskFailures.testLocalDirsFailures failed

2016-11-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653182#comment-15653182
 ] 

Yufei Gu commented on YARN-5862:


Thanks [~varun_saxena] for the review and commit.

> TestDiskFailures.testLocalDirsFailures failed
> -
>
> Key: YARN-5862
> URL: https://issues.apache.org/jira/browse/YARN-5862
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5862.001.patch
>
>
> {code}
> java.util.NoSuchElementException: null 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:247)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:179)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
>  
> {code}






[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653165#comment-15653165
 ] 

Hadoop QA commented on YARN-5545:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5545 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838291/YARN-5545.0007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7b3d10941fc9 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71adf44 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13850/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13850/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13850/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: 

[jira] [Updated] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-09 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5600:
-
Attachment: YARN-5600.005.patch

Addressing comments.

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch, 
> YARN-5600.005.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.






[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-09 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653151#comment-15653151
 ] 

Miklos Szegedi commented on YARN-5600:
--

Thank you for the useful comments, [~templedf]!
I fixed most of the requests; I have just a few comments:
"I'd rather see you divide by 1000 and add 1. It's a little less clever/more 
obvious."
Unfortunately, that does not give the same result: it adds an unnecessary 
second when the number is divisible by 1000, which is not what I need.
"In TestContainerManager, verifyContainerDir() and verifyAppDir() have a lot of 
common code. Maybe pull it out into another method?"
There are sporadic differences that make it difficult to fold them into a 
single function. They used to be one function, and I split them because I think 
it is more readable this way.
"In waitForApplicationDirDeleted() you might be better off with monotonic time."
Can you be more specific?
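
A quick numeric illustration of the rounding point (not code from the patch):

{code}
long delayMs = 3000;                                  // exact multiple of 1000
long divideAndAddOne = delayMs / 1000 + 1;            // 4 seconds (one second too many)
long roundUpOnlyIfNeeded = (delayMs + 999) / 1000;    // 3 seconds
// For delayMs = 3500 both give 4; the two only differ on exact multiples of 1000.
{code}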

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.






[jira] [Created] (YARN-5866) Fix few issues reported by jshint in new YARN UI

2016-11-09 Thread Akhil PB (JIRA)
Akhil PB created YARN-5866:
--

 Summary: Fix few issues reported by jshint in new YARN UI
 Key: YARN-5866
 URL: https://issues.apache.org/jira/browse/YARN-5866
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Akhil PB
Assignee: Akhil PB


There are a few minor issues reported by jshint (a JavaScript lint tool).
This JIRA is to track and fix those issues.






[jira] [Updated] (YARN-5865) Retrospect updateApplicationPriority api to handle state store exception in align with YARN-5611

2016-11-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5865:
--
Description: Post YARN-5611, revisit dynamic update of application priority 
logic with respect to state store error handling.  (was: Post YARN-5611, 
revisit dynamic update of application priority logic.)

> Retrospect updateApplicationPriority api to handle state store exception in 
> align with YARN-5611
> 
>
> Key: YARN-5865
> URL: https://issues.apache.org/jira/browse/YARN-5865
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sunil G
>Assignee: Sunil G
>
> Post YARN-5611, revisit dynamic update of application priority logic with 
> respect to state store error handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4206) Add life time value in Application report and web UI

2016-11-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653125#comment-15653125
 ] 

Rohith Sharma K S commented on YARN-4206:
-

Thanks [~gsaha] for bringing up the point about the remaining time for an 
application. I think the application report should contain the absoluteTimeout 
value, i.e. when it gets expired. For the client, there could be a new API, 
{{ApplicationClientProtocol#getApplicationRemainingTime(ApplicationId)}}, which 
returns the remaining timeout in seconds. Thoughts? [~sunilg] [~jianhe] [~vinodkv] 
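A rough sketch of what such an API could look like (the interface name and exact shape are assumptions, just to illustrate the proposal):
{code}
// Purely illustrative sketch of the proposed client-facing call; the real
// protocol would likely use request/response wrapper records.
public interface ApplicationTimeoutQuery {
  /**
   * Remaining lifetime of the given application in seconds, i.e. the
   * absoluteTimeout minus the current time.
   */
  long getApplicationRemainingTime(ApplicationId applicationId)
      throws YarnException, IOException;
}
{code}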

> Add life time value in Application report and web UI
> 
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5819) Verify fairshare and minshare preemption

2016-11-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5819:
---
Attachment: yarn-5819.YARN-4752.4.patch

Patch v4 that drops unused imports. 

> Verify fairshare and minshare preemption
> 
>
> Key: YARN-5819
> URL: https://issues.apache.org/jira/browse/YARN-5819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5819.YARN-4752.1.patch, 
> yarn-5819.YARN-4752.2.patch, yarn-5819.YARN-4752.3.patch, 
> yarn-5819.YARN-4752.4.patch
>
>
> JIRA to track the unit test(s) verifying both fairshare and minshare 
> preemption. The tests should verify:
> # preemption within a single leaf queue
> # preemption between sibling leaf queues
> # preemption between non-sibling leaf queues
> # {{allowPreemption = false}} should disallow preemption from a queue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653069#comment-15653069
 ] 

Rohith Sharma K S commented on YARN-5611:
-

thanks Jian and Sunil for thorough review.. :-)

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.0009.patch, YARN-5611.0010.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-09 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5545:
---
Attachment: YARN-5545.0007.patch

Attaching a patch handling comments from Naga.

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.0006.patch, 
> YARN-5545.0007.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5865) Retrospect updateApplicationPriority api to handle state store exception in align with YARN-5611

2016-11-09 Thread Sunil G (JIRA)
Sunil G created YARN-5865:
-

 Summary: Retrospect updateApplicationPriority api to handle state 
store exception in align with YARN-5611
 Key: YARN-5865
 URL: https://issues.apache.org/jira/browse/YARN-5865
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sunil G
Assignee: Sunil G


Post YARN-5611, revisit dynamic update of application priority logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5819) Verify fairshare and minshare preemption

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652915#comment-15652915
 ] 

Hadoop QA commented on YARN-5819:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
24s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 30 unchanged - 0 fixed = 36 total (was 30) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m  
0s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838282/yarn-5819.YARN-4752.3.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 95b697768340 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4752 / 9568f41 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13848/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13848/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13848/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Verify fairshare and minshare preemption
> 
>
> Key: YARN-5819
>

[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652909#comment-15652909
 ] 

Hadoop QA commented on YARN-5849:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 25 unchanged - 23 fixed = 25 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 231 unchanged - 5 fixed = 231 total (was 236) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
39s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5849 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838279/YARN-5849.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aa42f1828b05 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 

[jira] [Commented] (YARN-5634) Simplify initialization/use of RouterPolicy via a RouterPolicyFacade

2016-11-09 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652899#comment-15652899
 ] 

Subru Krishnan commented on YARN-5634:
--

Thanks [~curino] for the patch. I looked at it; please find my comments below.

 General note: 
  * It looks like you have the IntelliJ formatting set rather than the Hadoop 
standard, e.g. the unnecessary change to imports in 
{{FederationStateStoreFacade}}. I feel this is the root cause of the 
checkstyle warnings too.

 {{YarnConfiguration}}
  * For every new config, you need to add a corresponding default in 
_yarn-default.xml_ or an exclusion in {{TestYarnConfigurationFields}}. I prefer 
the latter for these configs.
  * I don't think we need DEFAULT_FEDERATION_POLICY_MANAGER_PARAMS (can we 
avoid FEDERATION_POLICY_MANAGER_PARAMS itself?) in the interest of limiting 
the _configuration_ explosion.

 {{RouterPolicyFacade}}
  * {code}  String defaultPolicyParamString = 
conf.get(YarnConfiguration.DEFAULT_FEDERATION_POLICY_MANAGER_PARAMS, 
YarnConfiguration.DEFAULT_FEDERATION_POLICY_MANAGER_PARAMS); {code} should be 
{code} String defaultPolicyParamString = 
conf.get(YarnConfiguration.FEDERATION_POLICY_MANAGER_PARAMS, 
YarnConfiguration.DEFAULT_FEDERATION_POLICY_MANAGER_PARAMS); {code} 
  * Please use *Charset.defaultCharset()* instead of hard-coding (also for existing 
usages, like in {{WeightedPolicyInfo}}).
  * I feel it's better if we add the _fallbackPolicyManager_ to the cache 
during initialization so that we can avoid the costly 
*fallbackPolicyManager::getRouterPolicy* in every invocation of 
*getHomeSubcluster*.
  * We should have a warning log with context that we were not able to retrieve 
the policy configuration for the input queue.
  * The _if_ clause can be rephrased as {code} cachedConfs.containsKey(queue) 
&& !cachedConfs.get(queue).equals(configuration) {code}.

{{FederationStateStoreFacade}}
  * We should specify for which _queue_ we got null from the StateStore, and 
possibly log it.

{{TestFederationPolicyFacade}}
  * Overall the coverage is nice! The only feedback is that it took me quite some 
time to figure out how the configuration was getting updated in 
*testConfigurationUpdate*. IIUC, it's done implicitly through 
{{FederationPoliciesTestUtil::initFacade}}? If so, can we do it explicitly, as 
that will help considerably with readability; moreover, none of the other tests 
use it, and it might even be an undesirable side-effect.
  * Nit: can we rename *testgetHomeSubcluster* to *testGetHomeSubcluster* and 
move it up to be the first test?


> Simplify initialization/use of RouterPolicy via a RouterPolicyFacade 
> -
>
> Key: YARN-5634
> URL: https://issues.apache.org/jira/browse/YARN-5634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: oct16-medium
> Attachments: YARN-5634-YARN-2915.01.patch, 
> YARN-5634-YARN-2915.02.patch
>
>
> The current set of policies requires some machinery to (re)initialize based on 
> changes in the SubClusterPolicyConfiguration. This JIRA tracks the effort to 
> hide much of that behind a simple RouterPolicyFacade, making the lifecycle and 
> usage of the policies easier for consumers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3955) Support for priority ACLs in CapacityScheduler

2016-11-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652791#comment-15652791
 ] 

Jian He commented on YARN-3955:
---

Thanks Sunil, some comments on the patch:

- PriorityACLConfiguration#getPrioirityAcl typo
- I think we may not need a public API record PriorityACL, as it's not supposed 
to be used by external applications. We may use AccessType#SUBMIT_APP_PRIORITY 
directly.
- AccessType#SUBMIT_APP_PRIORITY: maybe call it ACCESS_PRIORITY?
- make the log message clear that "user does not have permission to change 
priority" ?
{code}
  RMAuditLogger.logFailure(callerUGI.getShortUserName(),
  AuditConstants.UPDATE_APP_PRIORITY,
  "User doesn't have permissions to "
  + ApplicationAccessType.MODIFY_APP.toString(),
  "ClientRMService", AuditConstants.UNAUTHORIZED_USER, applicationId);
  throw RPCUtil.getRemoteException(new AccessControlException("User "
  + callerUGI.getShortUserName() + " cannot perform operation "
  + ApplicationAccessType.MODIFY_APP.name() + " on " + applicationId));
{code}
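For example, the reworded call could look like this (a sketch of the suggested wording only, not the actual patch):
{code}
  // Sketch: same audit call, with a message that names the priority change
  RMAuditLogger.logFailure(callerUGI.getShortUserName(),
      AuditConstants.UPDATE_APP_PRIORITY,
      "User doesn't have permission to change the application priority",
      "ClientRMService", AuditConstants.UNAUTHORIZED_USER, applicationId);
{code}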
- I think the readLock is not needed; the field itself is not changing.
{code}
try {
  readLock.lock();
  return priorityAcls;
} finally {
  readLock.unlock()
{code}
- Could you add sample configurations in capacity-scheduler.xml, and add 
comments about what the syntax is for the config?
- The priority that comes from the submissionContext can be an arbitrary value; it's not 
guaranteed that "csqueue.getPriorityPrivilegedEntity(priority)" will always 
return the object?
{code}
  Priority priority = submissionContext.getPriority();
  if (null != csqueue) {
if ((!authorizer.checkPermission(
new AccessRequest(csqueue.getPrivilegedEntity(), userUgi,
SchedulerUtils.toAccessType(QueueACL.SUBMIT_APPLICATIONS),
applicationId.toString(), appName,
Server.getRemoteAddress(), null))
&& !authorizer.checkPermission(
new AccessRequest(csqueue.getPrivilegedEntity(), userUgi,
SchedulerUtils.toAccessType(QueueACL.ADMINISTER_QUEUE),
applicationId.toString(), appName,
Server.getRemoteAddress(), null)))
|| !authorizer.checkPermission(new AccessRequest(
csqueue.getPriorityPrivilegedEntity(priority), userUgi,
{code}
- I feel that adding every possible priority at every integer for a queue into the 
ConfiguredYarnAuthorizer is cumbersome. One solution in my mind is to create a 
new QueueEntity which overrides PrivilegedEntity and adds a max-priority 
field. In checkPermission, we check whether the accessType is accessPriority; if 
it is, we check whether the priority is less than the max-priority. 
AccessRequest also needs a new priority field.
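A rough sketch of that idea (all names, fields and signatures are assumptions, not part of the patch):
{code}
// Illustrative only: a queue-scoped entity that carries the highest priority
// its ACL allows; a checkPermission-style method would compare the requested
// priority against this maximum for ACCESS_PRIORITY requests.
class QueuePriorityEntity {
  private final int maxPriority;

  QueuePriorityEntity(int maxPriority) {
    this.maxPriority = maxPriority;
  }

  boolean allowsPriority(int requestedPriority) {
    return requestedPriority <= maxPriority;
  }
}
{code}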

> Support for priority ACLs in CapacityScheduler
> --
>
> Key: YARN-3955
> URL: https://issues.apache.org/jira/browse/YARN-3955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: ApplicationPriority-ACL.pdf, 
> ApplicationPriority-ACLs-v2.pdf, YARN-3955.v0.patch, YARN-3955.v1.patch, 
> YARN-3955.wip1.patch
>
>
> Support will be added for user-level access permission to use different 
> application priorities. This is to avoid situations where all users try 
> running at max priority in the cluster and thus degrade the value of 
> priorities.
> Access Control Lists can be set per priority level within each queue. Below 
> is an example configuration that can be added to the capacity scheduler 
> configuration file for each queue level.
> yarn.scheduler.capacity.root...acl=user1,user2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5819) Verify fairshare and minshare preemption

2016-11-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652786#comment-15652786
 ] 

Karthik Kambatla edited comment on YARN-5819 at 11/10/16 2:31 AM:
--

Patch v3 incorporates most recent review feedback except the following bits.

bq. In FSPreemptionThread.run(), the call to 
containerNode.removeContainerForPreemption(container) is pretty important to 
keep the FSSchedulerNode state sane. Do you need to do anything to make sure it 
doesn't get skipped because of an exception? In general, I'm a little nervous 
about the bookkeeping on the containers to preempt; it seems like something 
that could quietly get out of sync and cause issues.

Very valid point. The root of this is splitting the logic of adding and 
removing containers on a node to be preempted across two threads - 
{{FSPreemptionThread}} marks them and {{PreemptContainersTask}} attempts 
preemption and removes them from the collection. The latter is triggered via 
{{FSPreemptionThread.run()}} -> {{preemptContainers()}} -> 
{{PreemptContainersTask.run()}}. In the current path, there is no 
exception-throwing code. However, one might add it in the future. To 
future-proof this, I could add a try-finally block to the task run method. That 
still leaves preemptContainers and I don't think a try-finally block is enough 
there.

Ideally, I would like to call out that these two methods cannot throw 
Exceptions. Would javadoc-ing that be enough? 
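For illustration, a minimal sketch of the try-finally variant discussed above (the class shape and fields are assumed; this is not the actual patch):
{code}
// Sketch only: the bookkeeping cleanup runs even if the preemption attempt
// were ever changed to throw in the future.
@Override
public void run() {
  try {
    // attempt the actual preemption of 'container' (details elided)
  } finally {
    // always clear the marker so the FSSchedulerNode state stays sane
    containerNode.removeContainerForPreemption(container);
  }
}
{code}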

bq. {{FSAppAttempt.preemptedResources}} should be final, as should its 
neighbors.
The neighbors cannot be final the way they are today - we update the reference 
to point to different objects at different times.

bq. The import of org.junit.After in TestFairSchedulerPreemption is grouped 
wrong.
The spacing alone was wrong. My IDE retains alphabetic order irrespective of 
whether the import is static. I thought that was normal. Should I change 
anything there?

bq. {{TestFairSchedulerPreemption.writeAllocFile()}} never throws an 
IOException.
It actually does when we call {{new FileWriter()}}.


was (Author: kasha):
Patch v3 incorporates most recent review feedback except the following bits.

bq. In FSPreemptionThread.run(), the call to 
containerNode.removeContainerForPreemption(container) is pretty important to 
keep the FSSchedulerNode state sane. Do you need to do anything to make sure it 
doesn't get skipped because of an exception? In general, I'm a little nervous 
about the bookkeeping on the containers to preempt; it seems like something 
that could quietly get out of sync and cause issues.

Very valid point. The root of this is splitting the logic of adding and 
removing containers on a node to be preempted across two threads - 
{{FSPreemptionThread}} marks them and {{PreemptContainersTask}} attempts 
preemption and removes them from the collection. The latter is triggered via 
{{FSPreemptionThread.run()}} -> {{preemptContainers()}} -> 
{{PreemptContainersTask.run()}}. In the current path, there is no 
exception-throwing code. However, one might add it in the future. To 
future-proof this, I could add a try-finally block to the task run method. That 
still leaves preemptContainers and I don't think a try-finally block is enough 
there.

Ideally, I would like to call out that these two methods cannot throw 
Exceptions. Would javadoc-ing that be enough? 

bq. {{FSAppAttempt.preemptedResources}} should be final, as should its 
neighbors.
The neighbors cannot be the way they are today - we update the reference to 
point to different objects at different times.

bq. The import of org.junit.After in TestFairSchedulerPreemption is grouped 
wrong.
The spacing alone was wrong. My IDE retains alphabetic order irrespective of 
whether the import is static. I thought that was normal. Should I change 
anything there?

bq. {{TestFairSchedulerPreemption.writeAllocFile()}} never throws an 
IOException.
It actually does when we call {{new FileWriter()}}.

> Verify fairshare and minshare preemption
> 
>
> Key: YARN-5819
> URL: https://issues.apache.org/jira/browse/YARN-5819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5819.YARN-4752.1.patch, 
> yarn-5819.YARN-4752.2.patch, yarn-5819.YARN-4752.3.patch
>
>
> JIRA to track the unit test(s) verifying both fairshare and minshare 
> preemption. The tests should verify:
> # preemption within a single leaf queue
> # preemption between sibling leaf queues
> # preemption between non-sibling leaf queues
> # {{allowPreemption = false}} should disallow preemption from a queue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-5819) Verify fairshare and minshare preemption

2016-11-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652786#comment-15652786
 ] 

Karthik Kambatla commented on YARN-5819:


Patch v3 incorporates most recent review feedback except the following bits.

bq. In FSPreemptionThread.run(), the call to 
containerNode.removeContainerForPreemption(container) is pretty important to 
keep the FSSchedulerNode state sane. Do you need to do anything to make sure it 
doesn't get skipped because of an exception? In general, I'm a little nervous 
about the bookkeeping on the containers to preempt; it seems like something 
that could quietly get out of sync and cause issues.

Very valid point. The root of this is splitting the logic of adding and 
removing containers on a node to be preempted across two threads - 
{{FSPreemptionThread}} marks them and {{PreemptContainersTask}} attempts 
preemption and removes them from the collection. The latter is triggered via 
{{FSPreemptionThread.run()}} -> {{preemptContainers()}} -> 
{{PreemptContainersTask.run()}}. In the current path, there is no 
exception-throwing code. However, one might add it in the future. To 
future-proof this, I could add a try-finally block to the task run method. That 
still leaves preemptContainers and I don't think a try-finally block is enough 
there.

Ideally, I would like to call out that these two methods cannot throw 
Exceptions. Would javadoc-ing that be enough? 

bq. {{FSAppAttempt.preemptedResources}} should be final, as should its 
neighbors.
The neighbors cannot be final the way they are today - we update the reference to 
point to different objects at different times.

bq. The import of org.junit.After in TestFairSchedulerPreemption is grouped 
wrong.
The spacing alone was wrong. My IDE retains alphabetic order irrespective of 
whether the import is static. I thought that was normal. Should I change 
anything there?

bq. {{TestFairSchedulerPreemption.writeAllocFile()}} never throws an 
IOException.
It actually does when we call {{new FileWriter()}}.

> Verify fairshare and minshare preemption
> 
>
> Key: YARN-5819
> URL: https://issues.apache.org/jira/browse/YARN-5819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5819.YARN-4752.1.patch, 
> yarn-5819.YARN-4752.2.patch, yarn-5819.YARN-4752.3.patch
>
>
> JIRA to track the unit test(s) verifying both fairshare and minshare 
> preemption. The tests should verify:
> # preemption within a single leaf queue
> # preemption between sibling leaf queues
> # preemption between non-sibling leaf queues
> # {{allowPreemption = false}} should disallow preemption from a queue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5819) Verify fairshare and minshare preemption

2016-11-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5819:
---
Attachment: yarn-5819.YARN-4752.3.patch

> Verify fairshare and minshare preemption
> 
>
> Key: YARN-5819
> URL: https://issues.apache.org/jira/browse/YARN-5819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5819.YARN-4752.1.patch, 
> yarn-5819.YARN-4752.2.patch, yarn-5819.YARN-4752.3.patch
>
>
> JIRA to track the unit test(s) verifying both fairshare and minshare 
> preemption. The tests should verify:
> # preemption within a single leaf queue
> # preemption between sibling leaf queues
> # preemption between non-sibling leaf queues
> # {{allowPreemption = false}} should disallow preemption from a queue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652728#comment-15652728
 ] 

Hadoop QA commented on YARN-5761:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 833 unchanged - 17 fixed = 843 total (was 850) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838275/YARN-5761.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 218b852a21d8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71adf44 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/13847/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/13847/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/13847/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| checkstyle | 

[jira] [Updated] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-09 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5849:
-
Attachment: YARN-5849.003.patch

Fix checkstyle issues

> Automatically create YARN control group for pre-mounted cgroups
> ---
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch, 
> YARN-5849.002.patch, YARN-5849.003.patch
>
>
> YARN can be launched with linux-container-executor.cgroups.mount set to 
> false. It will then search for the cgroup mount paths set up by the administrator 
> by parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this is specified but not created, YARN will fail at 
> startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating the YARN control group in the case 
> above. It reduces the cost of administration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-11-09 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652664#comment-15652664
 ] 

Xuan Gong commented on YARN-5761:
-

Uploaded a new patch to address all the previous comments.

[~mshen] [~jhung] Could you take a look at this JIRA as well as 
https://issues.apache.org/jira/browse/YARN-5746 and 
https://issues.apache.org/jira/browse/YARN-5756? I am considering these three 
JIRAs as prerequisites for both YARN-5734 and YARN-5761. 

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch, YARN-5761.3.patch
>
>
> Currently, the scheduler code does both queue management and scheduling work. 
> We'd better separate the queue manager out of the scheduler logic. In that case, 
> it would be much easier and safer to extend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5761) Separate QueueManager from Scheduler

2016-11-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5761:

Attachment: YARN-5761.3.patch

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch, YARN-5761.3.patch
>
>
> Currently, the scheduler code does both queue management and scheduling work. 
> We'd better separate the queue manager out of the scheduler logic. In that case, 
> it would be much easier and safer to extend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5864) Capacity Scheduler preemption for fragmented cluster

2016-11-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5864:
-
Attachment: YARN-5864.poc-0.patch

> Capacity Scheduler preemption for fragmented cluster 
> -
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch
>
>
> YARN-4390 added preemption for reserved containers. However, we found one case 
> where a large container cannot be allocated even if all queues are under their 
> limits.
> For example, we have:
> {code}
> Two queues, a and b, capacity 50:50 
> Two nodes: n1 and n2, each of them have 50 resource 
> Now queue-a uses 10 on n1 and 10 on n2
> queue-b asks for one single container with resource=45. 
> {code} 
> The container could be reserved on either host (each has only 40 resources free, 
> less than the requested 45), but no preemption will happen because all queues 
> are under their limits. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5864) Capacity Scheduler preemption for fragmented cluster

2016-11-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652608#comment-15652608
 ] 

Wangda Tan commented on YARN-5864:
--

The problem in the description is hard, because it is hard to clearly explain why a 
queue would be preempted even though it is within its limit.

So I'm proposing to solve one use case only: in some of our customers' 
configurations, there are separate queues for long-running services, for example 
an LLAP queue for LLAP services. LLAP services scale up and down depending on the 
workload, and they ask for containers with lots of resources to make sure the 
hosts running LLAP daemons are not used by other applications.

We want to allocate containers for such long-running services sooner when they 
need to scale up.

There's one quick approach in my mind to handle the use case above: 
- Add a new preemption selector (which makes sure this feature can be disabled 
by configuration)
- Add a white-list of queues for the new selector: only queues in the white-list 
can preempt from other queues
- When a reserved container from a white-listed queue has been outstanding beyond 
the configured timeout, we will look at the node which holds the reservation and 
select containers from non-whitelisted queues to preempt.

Thoughts and suggestions? [~curino], [~eepayne], [~sunilg].

Attached patch for review as well.

> Capacity Scheduler preemption for fragmented cluster 
> -
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> YARN-4390 added preemption for reserved containers. However, we found one case 
> where a large container cannot be allocated even if all queues are under their 
> limits.
> For example, we have:
> {code}
> Two queues, a and b, capacity 50:50 
> Two nodes: n1 and n2, each of them have 50 resource 
> Now queue-a uses 10 on n1 and 10 on n2
> queue-b asks for one single container with resource=45. 
> {code} 
> The container could be reserved on either host, but no preemption will 
> happen because all queues are under their limits. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5864) Capacity Scheduler preemption for fragmented cluster

2016-11-09 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5864:


 Summary: Capacity Scheduler preemption for fragmented cluster 
 Key: YARN-5864
 URL: https://issues.apache.org/jira/browse/YARN-5864
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Wangda Tan
Assignee: Wangda Tan


YARN-4390 added preemption for reserved containers. However, we found one case 
where a large container cannot be allocated even if all queues are under their 
limits.

For example, we have:
{code}
Two queues, a and b, capacity 50:50 
Two nodes: n1 and n2, each of them have 50 resource 
Now queue-a uses 10 on n1 and 10 on n2
queue-b asks for one single container with resource=45. 
{code} 

The container could be reserved on either host, but no preemption will 
happen because all queues are under their limits. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652550#comment-15652550
 ] 

Jian He commented on YARN-5611:
---

Committed to trunk and branch-2, thanks Rohith!
Thanks Sunil for helping with the reviews!



> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.0009.patch, YARN-5611.0010.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5856) Unnecessary duplicate start container request sent to NM State store

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652530#comment-15652530
 ] 

Hudson commented on YARN-5856:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10807 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10807/])
YARN-5856. Unnecessary duplicate start container request sent to NM 
(naganarasimha_gr: rev de3a5f8d08f64d0c2021a84b40e63e716da2321c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java


> Unnecessary duplicate start container request sent to NM State store
> 
>
> Key: YARN-5856
> URL: https://issues.apache.org/jira/browse/YARN-5856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5856.01.patch
>
>
> In ContainerManagerImpl#startContainerInternal, a duplicate store-container 
> request is sent to the NM state store, which is unnecessary.
> {code}
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> dispatcher.getEventHandler().handle(
>   new ApplicationContainerInitEvent(container));
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652489#comment-15652489
 ] 

Hudson commented on YARN-5611:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10806 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10806/])
YARN-5611. Provide an API to update lifetime of an application. (jianhe: rev 
bcc15c6290b3912a054323695a6a931b0de163bd)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/UpdateApplicationTimeoutsRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/proto/yarn_server_resourcemanager_recovery.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientRedirect.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/AbstractLivelinessMonitor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/UpdateApplicationTimeoutsResponsePBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationClientProtocol.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Times.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/monitor/RMAppLifetimeMonitor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/UpdateApplicationTimeoutsRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationStateData.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateUpdateAppEvent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/MockResourceManagerFacade.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ApplicationClientProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestApplicationLifetimeMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationStateDataPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/applicationclient_protocol.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
* (edit) 

[jira] [Commented] (YARN-5856) Unnecessary duplicate start container request sent to NM State store

2016-11-09 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652480#comment-15652480
 ] 

Naganarasimha G R commented on YARN-5856:
-

Thanks for the contribution, [~varun_saxena]. I have committed it to trunk!
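
For illustration, a minimal sketch of the deduplicated sequence the quoted 
description below calls for, keeping a single storeContainer call before the 
init event is dispatched (which of the two calls the committed patch actually 
removed is not asserted here):
{code}
// Hypothetical sketch: persist the container exactly once, then dispatch the
// ApplicationContainerInitEvent; the second, duplicated store call is gone.
this.context.getNMStateStore().storeContainer(containerId,
    containerTokenIdentifier.getVersion(), request);
dispatcher.getEventHandler().handle(
    new ApplicationContainerInitEvent(container));
{code}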

> Unnecessary duplicate start container request sent to NM State store
> 
>
> Key: YARN-5856
> URL: https://issues.apache.org/jira/browse/YARN-5856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5856.01.patch
>
>
> In ContainerManagerImpl#startContainerInternal, a duplicate store container 
> request is sent to NM State store which is unnecessary.
> {code}
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> dispatcher.getEventHandler().handle(
>   new ApplicationContainerInitEvent(container));
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-09 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652452#comment-15652452
 ] 

Naganarasimha G R commented on YARN-5820:
-

[~ajithshetty], is it really required to use HelpFormatter for all the lines? 
Can we avoid it for the first line?
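
For illustration, one way to do that with commons-cli is to print the usage 
lines directly and let HelpFormatter render only the aligned option 
descriptions. This is a minimal sketch; the Options wiring below is 
hypothetical and not NodeCLI's actual setup.
{code}
import java.io.PrintWriter;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Options;

public class NodeCliUsageSketch {
  public static void main(String[] args) {
    // Hypothetical option setup, for illustration only.
    Options opts = new Options();
    opts.addOption("all", false, "Works with -list to list all nodes.");
    opts.addOption("list", false, "List all running nodes.");
    opts.addOption("states", true, "Works with -list to filter nodes by state.");
    opts.addOption("status", true, "Prints the status report of the node.");

    PrintWriter pw = new PrintWriter(System.out);
    // Print the custom usage header without going through HelpFormatter...
    pw.println("usage: yarn node -list [-states <States>|-all]");
    pw.println("       yarn node -status <NodeId>");
    // ...and let HelpFormatter render only the aligned option descriptions.
    new HelpFormatter().printOptions(pw, 80, opts, 2, 3);
    pw.flush();
  }
}
{code}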

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
> Attachments: YARN-5820.01.patch, YARN-5820.02.patch, 
> YARN-5820.03.patch, YARN-5820.04.patch
>
>
> Current message is:
> {noformat}
> usage: node
>  -all   Works with -list to list all nodes.
>  -list  List all running nodes. Supports optional use of
> -states to filter nodes based on node state, all -all
> to list all nodes.
>  -statesWorks with -list to filter nodes based on input
> comma-separated list of node states.
>  -statusPrints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states |-all] | -status ]
>  -all   Works with -list to list all nodes.
>  -list  List all running nodes. Supports optional use of
> -states to filter nodes based on node state, all -all
> to list all nodes.
>  -statesWorks with -list to filter nodes based on input
> comma-separated list of node states.
>  -statusPrints the status report of the node.
> {noformat}
> or that.
> {noformat}
> usage: yarn node -list [-states |-all] 
>yarn node -status 
>  -all   Works with -list to list all nodes.
>  -list  List all running nodes. Supports optional use of
> -states to filter nodes based on node state, all -all
> to list all nodes.
>  -statesWorks with -list to filter nodes based on input
> comma-separated list of node states.
>  -statusPrints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652447#comment-15652447
 ] 

Hudson commented on YARN-4498:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10805 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10805/])
YARN-4498. Application level node labels stats to be available in REST 
(naganarasimha_gr: rev edbee9e609e7f31d188660717ff9d3fb9f606abb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServiceAppsNodelabel.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java


> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.addendum.001.patch, 
> YARN-4498.branch-2.8.0001.patch, YARN-4498.branch-2.8.addendum.001.patch, 
> apps.xml
>
>
> Currently, node label stats per application are not available through REST, 
> e.g. the labels currently used by all live containers and the total stats of 
> containers per label for an app.
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-11-09 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652445#comment-15652445
 ] 

Naganarasimha G R commented on YARN-5545:
-

Any updates on this?

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.0005.patch, YARN-5545.0006.patch, 
> YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652426#comment-15652426
 ] 

Yufei Gu commented on YARN-5774:


Thanks [~templedf] for the review. 
In CS, the minimum share serves as the increment share and it cannot be zero, 
which is guaranteed by the CS sanity check. FIFO doesn't have such a sanity 
check, I guess because nobody has cared about it; we can add one for the FIFO 
scheduler in this JIRA or a follow-up. So CS, FIFO and FS are each fine on 
their own.
The real tricky part is the common scheduler (or RM) code. People writing code 
in the common parts might not even notice there is an increment share config, 
because CS and FIFO don't have it while FS does. That is how this issue arose 
in the first place.
This patch lets {{normalize()}} throw a runtime exception if the increment is 
0, which nothing needs to catch and handle; the RM will fail when it happens. 
The main reason is that a 0 increment should be treated as an invalid 
configuration, per the offline discussion with [~kasha].
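
For illustration, a rough sketch of the kind of guard being discussed for 
{{SchedulerUtils.normalizeRequest}}; {{incrementResource}} stands in for 
whatever parameter name the method uses, and the exact exception type and 
message in the actual patch may differ.
{code}
// Hypothetical guard: treat a zero increment as an invalid configuration
// instead of silently normalizing requests down to zero resources.
if (incrementResource.getMemorySize() == 0
    || incrementResource.getVirtualCores() == 0) {
  throw new IllegalArgumentException(
      "Increment resource must not be zero: " + incrementResource);
}
{code}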


> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch, YARN-5774.004.patch
>
>
> An MR job gets stuck in ACCEPTED status without any progress in Fair 
> Scheduler because there is no resource request for the AM. This happens when 
> {{yarn.scheduler.minimum-allocation-mb}} is configured to zero.
> The problem is in code shared by the Capacity Scheduler and the Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS, but not 
> in CS. So the common code in RMAppManager passes 
> {{yarn.scheduler.minimum-allocation-mb}} as the increment as well, because CS 
> has no increment concept, when it normalizes the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652395#comment-15652395
 ] 

Hadoop QA commented on YARN-5849:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 45 unchanged - 4 fixed = 47 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
25s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5849 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838243/YARN-5849.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7649cbd59489 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 59ee8b7 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5184) Fix up incompatible changes introduced on ContainerStatus and NodeReport

2016-11-09 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652388#comment-15652388
 ] 

Sangjin Lee commented on YARN-5184:
---

Hmm, I definitely understand the desire to minimize incompatibilities even for 
major versions. On the other hand, it might not be best if we say zero 
incompatibilities even for a major version upgrade. That would severely limit 
the ability to make more substantial changes to deliver useful features and get 
rid of tech debt, and so on. I think you would agree that there needs to be a 
happy medium here.

I don't have a strong opinion on this specific one for 3.0. If you feel 
strongly that this would be a major issue for 3.0, I'd be happy to keep the 
default methods. Do let me know, and I will update the patches and this JIRA. 
Thanks!
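
For illustration only, the pattern under discussion looks roughly like this; 
the class and getter below are stand-ins and do not reproduce the actual 
methods added by YARN-2882/YARN-5430.
{code}
// Illustrative Public-Stable abstract class and the two ways of adding a new
// accessor to it.
public abstract class StatusBase {
  // (1) Abstract addition: every existing subclass outside our control stops
  //     compiling until it implements the new method.
  // public abstract ExecutionType getExecutionType();

  // (2) Default implementation: old subclasses keep compiling and simply
  //     inherit the default value (the value chosen here is illustrative).
  public ExecutionType getExecutionType() {
    return ExecutionType.GUARANTEED;
  }
}
{code}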

> Fix up incompatible changes introduced on ContainerStatus and NodeReport
> 
>
> Key: YARN-5184
> URL: https://issues.apache.org/jira/browse/YARN-5184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5184-branch-2.8.poc.patch, 
> YARN-5184-branch-2.poc.patch
>
>
> YARN-2882 and YARN-5430 broke compatibility by adding abstract methods to 
> ContainerStatus. Since ContainerStatus is a Public-Stable class, adding 
> abstract methods to this class breaks any extensions. 
> To fix this, we should add default implementations to these new methods and 
> not leave them as abstract. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-11-09 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5725:
-
Attachment: YARN-5725.010.patch

Thank you for the comments, [~templedf]! I added the requested fixes and 
included your code snippet.

> Test uncaught exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when setting 
> IP and host
> 
>
> Key: YARN-5725
> URL: https://issues.apache.org/jira/browse/YARN-5725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-5725.000.patch, YARN-5725.001.patch, 
> YARN-5725.002.patch, YARN-5725.003.patch, YARN-5725.004.patch, 
> YARN-5725.005.patch, YARN-5725.006.patch, YARN-5725.007.patch, 
> YARN-5725.008.patch, YARN-5725.009.patch, YARN-5725.010.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The issue is a warning but it prevents container monitor to continue
> 2016-10-12 14:38:23,280 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
> 2016-10-12 14:38:23,281 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>  is interrupted. Exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-11-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652261#comment-15652261
 ] 

Yufei Gu commented on YARN-4329:


Thanks [~templedf] for the review and commit! Thanks [~Naganarasimha] for the 
review.

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: Screen Shot 2016-10-18 at 3.13.59 PM.png, 
> YARN-4329.001.patch, YARN-4329.002.patch, YARN-4329.003.patch, 
> YARN-4329.004.patch, YARN-4329.005.patch
>
>
> Similar to YARN-3946, it would be useful to capture possible reason why the 
> Application is in accepted state in FairScheduler



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-09 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5849:
-
Attachment: YARN-5849.002.patch

I updated the patch with a better implementation based on your comments. 
CGroupsHandlerImpl used to have a single mountCGroupController function that 
only handled the case when mounting is enabled. With this change there is a 
new function called initializeCGroupController. If we are dealing with a 
pre-mounted controller, it calls initializePreMountedCGroupController to make 
sure the group exists in the pre-mounted directories; otherwise it calls 
mountCGroupController. All resource handlers are supposed to call 
initializeCGroupController as they call mountCGroupController now. This way 
initializeCGroupController is called only if the subsystem in question is 
enabled, ensuring that we do not touch the cgroup mount points of external 
subsystems.
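
For illustration, a rough sketch of the dispatch described above. The 
{{enableCGroupMount}} field is a stand-in for however the mount setting is 
read; method bodies and exact signatures in the actual patch may differ.
{code}
// Hypothetical sketch: resource handlers call this instead of
// mountCGroupController, and only for controllers they actually enable.
private void initializeCGroupController(CGroupController controller)
    throws ResourceHandlerException {
  if (enableCGroupMount) {
    // The admin asked YARN to mount the controller hierarchy itself.
    mountCGroupController(controller);
  } else {
    // Pre-mounted case: verify that the yarn hierarchy exists under the mount
    // point found in /etc/mtab, creating it if necessary.
    initializePreMountedCGroupController(controller);
  }
}
{code}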

> Automatically create YARN control group for pre-mounted cgroups
> ---
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch, 
> YARN-5849.002.patch
>
>
> Yarn can be launched with linux-container-executor.cgroups.mount set to 
> false. It will search for the cgroup mount paths set up by the administrator 
> parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this is specified but not created YARN will fail at 
> startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating YARN control group in the case 
> above. It reduces the cost of administration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-09 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652246#comment-15652246
 ] 

Konstantinos Karanasos commented on YARN-4597:
--

Hi [~jianhe]. Yes, I will check the patch today.

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch, YARN-4597.008.patch, 
> YARN-4597.009.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652237#comment-15652237
 ] 

Jian He commented on YARN-4597:
---

The patch looks good to me. Is [~kkaranasos] going to take a look?

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch, YARN-4597.008.patch, 
> YARN-4597.009.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652148#comment-15652148
 ] 

Hudson commented on YARN-4329:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10804 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10804/])
YARN-4329. [YARN-5437] Allow fetching exact reason as to why a submitted 
(templedf: rev 59ee8b7a88603e94b5661a8d5d088f7aa99fe049)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestMaxRunningAppsEnforcer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/MaxRunningAppsEnforcer.java


> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Attachments: Screen Shot 2016-10-18 at 3.13.59 PM.png, 
> YARN-4329.001.patch, YARN-4329.002.patch, YARN-4329.003.patch, 
> YARN-4329.004.patch, YARN-4329.005.patch
>
>
> Similar to YARN-3946, it would be useful to capture possible reason why the 
> Application is in accepted state in FairScheduler



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5437) [Umbrella] Add more debug/diagnostic messages to scheduler

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652150#comment-15652150
 ] 

Hudson commented on YARN-5437:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10804 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10804/])
YARN-4329. [YARN-5437] Allow fetching exact reason as to why a submitted 
(templedf: rev 59ee8b7a88603e94b5661a8d5d088f7aa99fe049)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestMaxRunningAppsEnforcer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/MaxRunningAppsEnforcer.java


> [Umbrella] Add more debug/diagnostic messages to scheduler
> --
>
> Key: YARN-5437
> URL: https://issues.apache.org/jira/browse/YARN-5437
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-11-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652011#comment-15652011
 ] 

Daniel Templeton commented on YARN-4329:


+1  I'll commit the latest patch shortly.

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Attachments: Screen Shot 2016-10-18 at 3.13.59 PM.png, 
> YARN-4329.001.patch, YARN-4329.002.patch, YARN-4329.003.patch, 
> YARN-4329.004.patch, YARN-4329.005.patch
>
>
> Similar to YARN-3946, it would be useful to capture possible reason why the 
> Application is in accepted state in FairScheduler



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651964#comment-15651964
 ] 

Daniel Templeton commented on YARN-5774:


bq. So if users misconfigure the increment resource in fair scheduler, a 
detailed error message will show up.

That's great for fair scheduler, but what about CS and FIFO?  I'm particularly 
worried about those because an increment of 0 was not previously treated as 
invalid.  What's the intent of throwing the exception in {{normalize()}}?  You 
throw an exception when you want to halt the execution flow and allow some 
out-of-sequence remedial code to run.  In this case, no one is expecting to see 
this exception, so no one will catch it, and there's no action that needs to be 
taken in the CS and FIFO cases.  The net result is that things will fail in 
indirect ways.  Since it's not a failure for CS and FIFO, I don't think you 
should throw the exception.  In the case of FS, as you point out, the only 
reason to hit the exception is misuse of the resource calculator.

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch, YARN-5774.004.patch
>
>
> An MR job gets stuck in ACCEPTED status without any progress in Fair 
> Scheduler because there is no resource request for the AM. This happens when 
> {{yarn.scheduler.minimum-allocation-mb}} is configured to zero.
> The problem is in code shared by the Capacity Scheduler and the Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS, but not 
> in CS. So the common code in RMAppManager passes 
> {{yarn.scheduler.minimum-allocation-mb}} as the increment as well, because CS 
> has no increment concept, when it normalizes the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651938#comment-15651938
 ] 

Hadoop QA commented on YARN-4597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 58s{color} | {color:orange} root: The patch generated 15 new + 1053 
unchanged - 16 fixed = 1068 total (was 1069) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 235 unchanged - 1 fixed = 235 total (was 236) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-mapreduce-client-jobclient in 

[jira] [Commented] (YARN-5862) TestDiskFailures.testLocalDirsFailures failed

2016-11-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651876#comment-15651876
 ] 

Yufei Gu commented on YARN-5862:


Thanks [~varun_saxena] for the review. The findbugs and failed unit tests are 
unrelated.

> TestDiskFailures.testLocalDirsFailures failed
> -
>
> Key: YARN-5862
> URL: https://issues.apache.org/jira/browse/YARN-5862
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5862.001.patch
>
>
> {code}
> java.util.NoSuchElementException: null 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:247)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:179)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-11-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651833#comment-15651833
 ] 

Daniel Templeton commented on YARN-5725:


Thanks for humoring me on refactoring the _if_.  In its current form it's 
plain to see there are some issues.  What about something more like:

{code}
  Container container = context.getContainers().get(containerId);

  if (container != null) {
String[] ipAndHost = containerExecutor.getIpAndHost(container);

if ((ipAndHost != null) && (ipAndHost[0] != null) &&
(ipAndHost[1] != null)) {
  container.setIpAndHost(ipAndHost);
  LOG.info(containerId + "'s ip = " + ipAndHost[0]
  + ", and hostname = " + ipAndHost[1]);
} else {
  LOG.info("Can not get both ip and hostname: "
  + Arrays.toString(ipAndHost));
}
  } else {
LOG.info(containerId + " is missing. Not setting ip and "
+ "hostname");
  }
{code}

Notice that I changed the log message you added to INFO level because it's not 
something the admin can or should do anything about.  I also moved the {{+}} 
for string concatenation to the beginning of the line, and I fixed the line 
continuation indentation.  (Sorry.  It's easier to give you fixed code than to 
explain how to fix it.)

Your log message in {{reportResourceUsage()}} should also be INFO.

In the test code, let's not do a compound conditional in a _for_ loop.  Either 
add an _if-break_ for the second condition or switch to a _while_.

> Test uncaught exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when setting 
> IP and host
> 
>
> Key: YARN-5725
> URL: https://issues.apache.org/jira/browse/YARN-5725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
>  Labels: oct16-easy
> Attachments: YARN-5725.000.patch, YARN-5725.001.patch, 
> YARN-5725.002.patch, YARN-5725.003.patch, YARN-5725.004.patch, 
> YARN-5725.005.patch, YARN-5725.006.patch, YARN-5725.007.patch, 
> YARN-5725.008.patch, YARN-5725.009.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The issue is a warning but it prevents container monitor to continue
> 2016-10-12 14:38:23,280 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
> 2016-10-12 14:38:23,281 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>  is interrupted. Exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651830#comment-15651830
 ] 

Hadoop QA commented on YARN-5611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 53s{color} | {color:orange} root: The patch generated 15 new + 765 unchanged 
- 3 fixed = 780 total (was 768) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 32s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}253m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.mapred.pipes.TestPipeApplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5611 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838185/YARN-5611.0010.patch |
| Optional Tests |  asflicense 

[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-09 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651809#comment-15651809
 ] 

Miklos Szegedi commented on YARN-5849:
--

I see now what you mean. The way the code behaved without mounting before my 
patch is the following. When we start initializing the resource chain, we look 
for the yarn cgroup in every(!) controller and throw an exception and exit if 
the yarn group does not exist or is not writable, regardless of whether the 
controller is enabled or not.
{code}
  public enum CGroupController {
CPU("cpu"),
NET_CLS("net_cls"),
BLKIO("blkio"),
MEMORY("memory");
...  
  }
...
  for (CGroupController controller : CGroupController.values()) {
String name = controller.getName();
String controllerPath = findControllerInMtab(name, parsedMtab);

if (controllerPath != null) {
  File f = new File(controllerPath + "/" + cGroupPrefix);

  if (FileUtil.canWrite(f)) {
ret.put(controller, controllerPath);
  } else {
String error =
new StringBuffer("Mount point Based on mtab file: ")
.append(mtab)
.append(". Controller mount point not writable for: ")
.append(name).toString();

LOG.error(error);
throw new ResourceHandlerException(error);
  }
} else {
  LOG.warn("Controller not mounted but automount disabled: " + name);
}
  }
{code}
I will update my patch to avoid this and do the check and creation only if the 
subsystem in question is enabled.

> Automatically create YARN control group for pre-mounted cgroups
> ---
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch
>
>
> Yarn can be launched with linux-container-executor.cgroups.mount set to 
> false. It will search for the cgroup mount paths set up by the administrator 
> parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this is specified but not created YARN will fail at 
> startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating YARN control group in the case 
> above. It reduces the cost of administration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3955) Support for priority ACLs in CapacityScheduler

2016-11-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-3955:
--
Attachment: YARN-3955.v1.patch

Updating to a v1 patch with basic test cases.

> Support for priority ACLs in CapacityScheduler
> --
>
> Key: YARN-3955
> URL: https://issues.apache.org/jira/browse/YARN-3955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: ApplicationPriority-ACL.pdf, 
> ApplicationPriority-ACLs-v2.pdf, YARN-3955.v0.patch, YARN-3955.v1.patch, 
> YARN-3955.wip1.patch
>
>
> Support will be added for user-level access permissions to use different
> application priorities. This is to avoid situations where all users try
> running at max priority in the cluster, thus degrading the value of
> priorities.
> Access Control Lists can be set per priority level within each queue. Below
> is an example configuration that can be added to the capacity scheduler
> configuration file for each queue level:
> yarn.scheduler.capacity.root...acl=user1,user2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-09 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651784#comment-15651784
 ] 

Konstantinos Karanasos commented on YARN-5823:
--

Thanks for the review and the commit, [~asuresh]!

> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5823.001.patch, YARN-5823.002.patch, 
> YARN-5823.003.patch, YARN-5823.004.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case the AM does not get back the needed NMTokens that are required 
> to start the opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-09 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651781#comment-15651781
 ] 

Konstantinos Karanasos commented on YARN-5833:
--

Thanks [~asuresh], indeed those tests were unfortunately not kicked off by 
Jenkins...

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum-2.patch, 
> YARN-5833.003.addendum.patch, YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5819) Verify fairshare and minshare preemption

2016-11-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651747#comment-15651747
 ] 

Daniel Templeton commented on YARN-5819:


bq. Is there a good reason to not use for-loops here?

It doesn't pass my smell test.  I wasn't able to turn up any referenceable 
condemnation of what you've done.  And my suggested alternative doesn't work, 
because the expression has to be an {{Iterable}}, not an {{Iterator}}.  I don't 
have any argument left, other than to appeal to your humanity; think about the 
children.

In any case, your {{queue}} variable hides a field from 
{{SchedulerApplicationAttempt}}.

In {{FSPreemptionThread.run()}}, the call to 
{{containerNode.removeContainerForPreemption(container)}} is pretty important 
to keep the {{FSSchedulerNode}} state sane.  Do you need to do anything to make 
sure it doesn't get skipped because of an exception?  In general, I'm a little 
nervous about the bookkeeping on the containers to preempt; it seems like 
something that could quietly get out of sync and cause issues.
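
For instance, a try/finally would make it explicit that the bookkeeping is always
undone (just a sketch; the {{preemptContainer()}} call below is a stand-in for
whatever the loop body in {{run()}} actually does, not the real code):
{code}
  // Sketch: clean up the node-side "marked for preemption" state even if preemption throws.
  try {
    preemptContainer(container);  // stand-in for the actual per-container work in run()
  } finally {
    containerNode.removeContainerForPreemption(container);  // never skipped
  }
{code}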

{{FSAppAttempt.preemptedResources}} should be final, as should its neighbors.

You need javadoc for the last three methods in {{FSSchedulerNode}}.

{{FSSchedulerNode.containersForPreemption}} should be final.

In {{FSSchedulerTestBase}}, some javadoc for {{rmNodes}} and {{addNode()}} 
would be helpful.

The import of {{org.junit.After}} in {{TestFairSchedulerPreemption}} is grouped 
wrong.

{{TestFairSchedulerPreemption.writeAllocFile()}} never throws an 
{{IOException}}.

{{TestFairSchedulerPreemption.writeAllocFile()}} is now every bit as ugly as I 
feared it would be.  What if you pull out the two different conditionals into 
separate functions, like:

{code}
  private void writeAllocFile() {
...
out.println("");

out.println("");

writePreemptionParams(out);

// Child-1
out.println("");
writeResourceParams(out);
out.println("");

// Child-2
out.println("");
writeResourceParams(out);
out.println("");

out.println(""); // end of preemptable queue

// Queue with preemption disallowed
out.println("");
out.println("false" +
"");

writePreemptionParams(out);

// Child-1
out.println("");
writeResourceParams(out);
out.println("");

// Child-2
out.println("");
writeResourceParams(out);
out.println("");

out.println(""); // end of nonpreemptable queue

out.println("");
...
  }

  private void writePreemptionParams(PrintWriter out) {
if (fairsharePreemption) {
  out.println("1" +
  "");
  out.println("0" +
  "");
} else {
  out.println("0" +
  "");
}
  }

  private void writeResourceParams(PrintWriter out) {
if (!fairsharePreemption) {
  out.println("4096mb,4vcores");
}
  }
{code}

Happy medium?

> Verify fairshare and minshare preemption
> 
>
> Key: YARN-5819
> URL: https://issues.apache.org/jira/browse/YARN-5819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5819.YARN-4752.1.patch, yarn-5819.YARN-4752.2.patch
>
>
> JIRA to track the unit test(s) verifying both fairshare and minshare 
> preemption. The tests should verify:
> # preemption within a single leaf queue
> # preemption between sibling leaf queues
> # preemption between non-sibling leaf queues
> # {{allowPreemption = false}} should disallow preemption from a queue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-1593) support out-of-proc AuxiliaryServices

2016-11-09 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651723#comment-15651723
 ] 

Hitesh Shah edited comment on YARN-1593 at 11/9/16 6:55 PM:


[~vvasudev] One question on the design doc. The doc does not seem to cover how
user applications can define dependencies on these system services. For
example, how do we ensure that an MR/Tez/xyz container that requires the shuffle
service does not get launched on a node where the system service is not
running? This has two aspects: first, how to ensure container allocations
happen on the correct nodes where these services are running; and second, if the
service is down when the container actually gets launched, how will the
behavior change as a result (does the container eventually fail, does the NM
itself stop the launch of the container and send an error back, etc.)?

Is this something that will be looked at later, or should it be designed for
now to simplify the use of system services for user applications?


was (Author: hitesh):
[~vvasudev] One question on the design doc. The doc does not seem to cover how
user applications can define dependencies on these system services. For
example, how do we ensure that an MR/Tez/xyz container that requires the shuffle
service does not get launched on a node where the system service is not
running? This has two aspects: first, how to ensure container allocations
happen on the correct nodes where these services are running; and second, if the
service is down when the container actually gets launched, how will the
behavior change as a result (does the container eventually fail, does the NM
itself stop the launch of the container and send an error back, etc.)?

> support out-of-proc AuxiliaryServices
> -
>
> Key: YARN-1593
> URL: https://issues.apache.org/jira/browse/YARN-1593
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, rolling upgrade
>Reporter: Ming Ma
>Assignee: Varun Vasudev
> Attachments: SystemContainersandSystemServices.pdf
>
>
> AuxiliaryServices such as ShuffleHandler currently run in the same process as 
> NM. There are some benefits to host them in dedicated processes.
> 1. NM rolling restart. If we want to upgrade YARN, NM restart will force the
> ShuffleHandler to restart. If ShuffleHandler runs as a separate process,
> ShuffleHandler can continue to run during NM restart. NM can reconnect to
> the running ShuffleHandler after restart.
> 2. Resource management. It is possible another type of AuxiliaryServices will 
> be implemented. AuxiliaryServices are considered YARN application specific 
> and could consume lots of resources. Running AuxiliaryServices in separate 
> processes allow easier resource management. NM could potentially stop a 
> specific AuxiliaryServices process from running if it consumes resource way 
> above its allocation.
> Here are some high level ideas:
> 1. NM provides a hosting process for each AuxiliaryService. Existing 
> AuxiliaryService API doesn't change.
> 2. The hosting process provides RPC server for AuxiliaryService proxy object 
> inside NM to connect to.
> 3. When we rolling restart NM, the existing AuxiliaryService processes will 
> continue to run. NM could reconnect to the running AuxiliaryService processes 
> upon restart.
> 4. Policy and resource management of AuxiliaryServices. So far we don't have
> an immediate need for this. An AuxiliaryService could run inside a container and
> its resource utilization could be taken into account by the RM, and the RM could
> consider whether a specific type of application overutilizes cluster resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1593) support out-of-proc AuxiliaryServices

2016-11-09 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651723#comment-15651723
 ] 

Hitesh Shah commented on YARN-1593:
---

[~vvasudev] One question on the design doc. The doc does not seem to cover how
user applications can define dependencies on these system services. For
example, how do we ensure that an MR/Tez/xyz container that requires the shuffle
service does not get launched on a node where the system service is not
running? This has two aspects: first, how to ensure container allocations
happen on the correct nodes where these services are running; and second, if the
service is down when the container actually gets launched, how will the
behavior change as a result (does the container eventually fail, does the NM
itself stop the launch of the container and send an error back, etc.)?

> support out-of-proc AuxiliaryServices
> -
>
> Key: YARN-1593
> URL: https://issues.apache.org/jira/browse/YARN-1593
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, rolling upgrade
>Reporter: Ming Ma
>Assignee: Varun Vasudev
> Attachments: SystemContainersandSystemServices.pdf
>
>
> AuxiliaryServices such as ShuffleHandler currently run in the same process as 
> NM. There are some benefits to host them in dedicated processes.
> 1. NM rolling restart. If we want to upgrade YARN, NM restart will force the
> ShuffleHandler to restart. If ShuffleHandler runs as a separate process,
> ShuffleHandler can continue to run during NM restart. NM can reconnect to
> the running ShuffleHandler after restart.
> 2. Resource management. It is possible another type of AuxiliaryServices will 
> be implemented. AuxiliaryServices are considered YARN application specific 
> and could consume lots of resources. Running AuxiliaryServices in separate 
> processes allow easier resource management. NM could potentially stop a 
> specific AuxiliaryServices process from running if it consumes resource way 
> above its allocation.
> Here are some high level ideas:
> 1. NM provides a hosting process for each AuxiliaryService. Existing 
> AuxiliaryService API doesn't change.
> 2. The hosting process provides RPC server for AuxiliaryService proxy object 
> inside NM to connect to.
> 3. When we rolling restart NM, the existing AuxiliaryService processes will 
> continue to run. NM could reconnect to the running AuxiliaryService processes 
> upon restart.
> 4. Policy and resource management of AuxiliaryServices. So far we don't have
> an immediate need for this. An AuxiliaryService could run inside a container and
> its resource utilization could be taken into account by the RM, and the RM could
> consider whether a specific type of application overutilizes cluster resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1593) support out-of-proc AuxiliaryServices

2016-11-09 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-1593:
-
Assignee: Varun Vasudev  (was: Junping Du)

> support out-of-proc AuxiliaryServices
> -
>
> Key: YARN-1593
> URL: https://issues.apache.org/jira/browse/YARN-1593
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, rolling upgrade
>Reporter: Ming Ma
>Assignee: Varun Vasudev
> Attachments: SystemContainersandSystemServices.pdf
>
>
> AuxiliaryServices such as ShuffleHandler currently run in the same process as 
> NM. There are some benefits to host them in dedicated processes.
> 1. NM rolling restart. If we want to upgrade YARN, NM restart will force the
> ShuffleHandler to restart. If ShuffleHandler runs as a separate process,
> ShuffleHandler can continue to run during NM restart. NM can reconnect to
> the running ShuffleHandler after restart.
> 2. Resource management. It is possible another type of AuxiliaryServices will 
> be implemented. AuxiliaryServices are considered YARN application specific 
> and could consume lots of resources. Running AuxiliaryServices in separate 
> processes allow easier resource management. NM could potentially stop a 
> specific AuxiliaryServices process from running if it consumes resource way 
> above its allocation.
> Here are some high level ideas:
> 1. NM provides a hosting process for each AuxiliaryService. Existing 
> AuxiliaryService API doesn't change.
> 2. The hosting process provides RPC server for AuxiliaryService proxy object 
> inside NM to connect to.
> 3. When we rolling restart NM, the existing AuxiliaryService processes will 
> continue to run. NM could reconnect to the running AuxiliaryService processes 
> upon restart.
> 4. Policy and resource management of AuxiliaryServices. So far we don't have
> an immediate need for this. An AuxiliaryService could run inside a container and
> its resource utilization could be taken into account by the RM, and the RM could
> consider whether a specific type of application overutilizes cluster resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1593) support out-of-proc AuxiliaryServices

2016-11-09 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-1593:

Attachment: SystemContainersandSystemServices.pdf

Uploaded a document with our initial thoughts. This is not a comprehensive 
design document because we're interested in what folks in the community think 
about some of these issues. Once we have some feedback, we can do a second 
revision which can be closer to a design document.

[~haibochen], [~kasha], [~asuresh], [~chris.douglas], [~jlowe] - we'd really 
appreciate your thoughts. Thanks!

> support out-of-proc AuxiliaryServices
> -
>
> Key: YARN-1593
> URL: https://issues.apache.org/jira/browse/YARN-1593
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, rolling upgrade
>Reporter: Ming Ma
>Assignee: Junping Du
> Attachments: SystemContainersandSystemServices.pdf
>
>
> AuxiliaryServices such as ShuffleHandler currently run in the same process as 
> NM. There are some benefits to host them in dedicated processes.
> 1. NM rolling restart. If we want to upgrade YARN, NM restart will force the
> ShuffleHandler to restart. If ShuffleHandler runs as a separate process,
> ShuffleHandler can continue to run during NM restart. NM can reconnect to
> the running ShuffleHandler after restart.
> 2. Resource management. It is possible another type of AuxiliaryServices will 
> be implemented. AuxiliaryServices are considered YARN application specific 
> and could consume lots of resources. Running AuxiliaryServices in separate 
> processes allow easier resource management. NM could potentially stop a 
> specific AuxiliaryServices process from running if it consumes resource way 
> above its allocation.
> Here are some high level ideas:
> 1. NM provides a hosting process for each AuxiliaryService. Existing 
> AuxiliaryService API doesn't change.
> 2. The hosting process provides RPC server for AuxiliaryService proxy object 
> inside NM to connect to.
> 3. When we rolling restart NM, the existing AuxiliaryService processes will 
> continue to run. NM could reconnect to the running AuxiliaryService processes 
> upon restart.
> 4. Policy and resource management of AuxiliaryServices. So far we don't have
> an immediate need for this. An AuxiliaryService could run inside a container and
> its resource utilization could be taken into account by the RM, and the RM could
> consider whether a specific type of application overutilizes cluster resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5843) Incorrect documentation for timeline service entityType/events REST end points

2016-11-09 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5843:
---
Summary: Incorrect documentation for timeline service entityType/events 
REST end points  (was: Documentation wrong for entityType/events rest end point)

> Incorrect documentation for timeline service entityType/events REST end points
> --
>
> Key: YARN-5843
> URL: https://issues.apache.org/jira/browse/YARN-5843
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: YARN-5843.0001.patch, YARN-5843.0002.patch
>
>
> http(s)://<timeline server address:port>/ws/v1/timeline/{entityType}/events
> {noformat}
> entityIds - The entity IDs to retrieve events for.
> limit - A limit on the number of events to return for each entity. If null, 
> defaults to 100 events per entity.
> windowStart - If not null, retrieves only events later than the given time 
> (exclusive)
> windowEnd - If not null, retrieves only events earlier than the given time 
> (inclusive)
> eventTypes - Restricts the events returned to the given types. If null, 
> events of all types will be returned.
> {noformat}
> The parameters should be
> *entityId*
> *eventType*
> Mention that comma-separated *entityId* and *eventType* values can be passed for multiple arguments.
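
For illustration, a request using comma-separated values for both parameters could
look like the following (hostname, port, entity type, and IDs below are made up):
{noformat}
http://timelineserver.example.com:8188/ws/v1/timeline/SOME_ENTITY_TYPE/events?entityId=entity_1,entity_2&eventType=EVENT_A,EVENT_B
{noformat}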



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5859) TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource sometimes fails

2016-11-09 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger reassigned YARN-5859:
-

Assignee: Eric Badger

> TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource 
> sometimes fails
> -
>
> Key: YARN-5859
> URL: https://issues.apache.org/jira/browse/YARN-5859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Eric Badger
>
> Saw the following test failure:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.011 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> testParallelDownloadAttemptsForPublicResource(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
>   Time elapsed: 0.586 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testParallelDownloadAttemptsForPublicResource(TestResourceLocalizationService.java:2108)
> {noformat}
> The assert occurred at this place in the code:
> {code}
>   // Waiting for download to start.
>   Assert.assertTrue(waitForPublicDownloadToStart(spyService, 1, 200));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5834) TestNodeStatusUpdater.testNMRMConnectionConf compares nodemanager wait time to the incorrect value

2016-11-09 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651467#comment-15651467
 ] 

Miklos Szegedi commented on YARN-5834:
--

[~lichangleo], could you please take a look at this bug and check whether the description is correct?

> TestNodeStatusUpdater.testNMRMConnectionConf compares nodemanager wait time 
> to the incorrect value
> --
>
> Key: YARN-5834
> URL: https://issues.apache.org/jira/browse/YARN-5834
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Priority: Minor
>
> The function is TestNodeStatusUpdater#testNMRMConnectionConf()
> I believe the connectionWaitMs references below were meant to be 
> nmRmConnectionWaitMs.
> {code}
> conf.setLong(YarnConfiguration.NM_RESOURCEMANAGER_CONNECT_MAX_WAIT_MS,
> nmRmConnectionWaitMs);
> conf.setLong(YarnConfiguration.RESOURCEMANAGER_CONNECT_MAX_WAIT_MS,
> connectionWaitMs);
> ...
>   long t = System.currentTimeMillis();
>   long duration = t - waitStartTime;
>   boolean waitTimeValid = (duration >= nmRmConnectionWaitMs) &&
>   (duration < (*connectionWaitMs* + delta));
>   if(!waitTimeValid) {
> // throw exception if NM doesn't retry long enough
> throw new Exception("NM should have tried re-connecting to RM during 
> " +
>   "period of at least " + *connectionWaitMs* + " ms, but " +
>   "stopped retrying within " + (*connectionWaitMs* + delta) +
>   " ms: " + e, e);
>   }
> {code}
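
If that is indeed the intent, the corrected check would presumably read (sketch only):
{code}
  // Compare against the NM->RM wait time on both sides of the range check.
  boolean waitTimeValid = (duration >= nmRmConnectionWaitMs) &&
      (duration < (nmRmConnectionWaitMs + delta));
{code}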



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651253#comment-15651253
 ] 

Hudson commented on YARN-5833:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10800 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10800/])
YARN-5833. Addendum patch to include missing changes to yarn-default.xml (arun 
suresh: rev 280357c29f867e3ef6386ea5bd0f7b7ca6fe04eb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum-2.patch, 
> YARN-5833.003.addendum.patch, YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651252#comment-15651252
 ] 

Sunil G commented on YARN-5611:
---

Thanks [~rohithsharma]
Patch looks generally fine for me pending jenkins.

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.0009.patch, YARN-5611.0010.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required.
> Add a client API to update the lifetime of an application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651248#comment-15651248
 ] 

Arun Suresh edited comment on YARN-4597 at 11/9/16 3:38 PM:


The new test failures are caused by an issue with YARN-5833. Kicking Jenkins 
again after resolving it..


was (Author: asuresh):
The new test failures are caused by an issue with YARN-5833. Kicking it off 
again..

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch, YARN-4597.008.patch, 
> YARN-4597.009.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651248#comment-15651248
 ] 

Arun Suresh commented on YARN-4597:
---

The new test failures are caused by an issue with YARN-5833. Kicking it off 
again..

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch, YARN-4597.008.patch, 
> YARN-4597.009.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651207#comment-15651207
 ] 

Arun Suresh commented on YARN-5833:
---

Committed addendum patch to trunk and branch-2

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum-2.patch, 
> YARN-5833.003.addendum.patch, YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5833:
--
Attachment: YARN-5833.003.addendum-2.patch

This patch broke the following tests:
TestAMRMProxy
TestMROpportunisticMaps
TestDistributedScheduling

I think this patch forgot to include the changes needed in {{yarn-default.xml}}.
Attaching an addendum patch.

Since these tests won't be run by Jenkins, here is the output of the tests
running locally:

{noformat}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.499 sec - in 
org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
Running org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.654 sec - in 
org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling

Results :

Tests run: 7, Failures: 0, Errors: 0, Skipped: 0
...
...
---
 T E S T S
---
Running org.apache.hadoop.mapred.TestMROpportunisticMaps
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 115.501 sec - 
in org.apache.hadoop.mapred.TestMROpportunisticMaps

Results :

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

...
{noformat}


> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum-2.patch, 
> YARN-5833.003.addendum.patch, YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651190#comment-15651190
 ] 

Arun Suresh edited comment on YARN-5833 at 11/9/16 3:10 PM:


This patch broke the following tests:
TestAMRMProxy
TestMROpportunisticMaps
TestDistributedScheduling

I think we forgot to include the changes needed in {{yarn-default.xml}}.
Attaching an addendum patch.

Since these tests won't be run by Jenkins, here is the output of the tests
running locally:

{noformat}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.499 sec - in 
org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
Running org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.654 sec - in 
org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling

Results :

Tests run: 7, Failures: 0, Errors: 0, Skipped: 0
...
...
---
 T E S T S
---
Running org.apache.hadoop.mapred.TestMROpportunisticMaps
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 115.501 sec - 
in org.apache.hadoop.mapred.TestMROpportunisticMaps

Results :

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

...
{noformat}



was (Author: asuresh):
This patch broke the following tests:
TestAMRMProxy
TestMROpportunisticMaps
TestDistributedScheduling

I think this patch forgot to include the changes needed in {{yarn-default.xml}}.
Attaching an addendum patch.

Since these tests won't be run by Jenkins, here is the output of the tests
running locally:

{noformat}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.499 sec - in 
org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy
Running org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.654 sec - in 
org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling

Results :

Tests run: 7, Failures: 0, Errors: 0, Skipped: 0
...
...
---
 T E S T S
---
Running org.apache.hadoop.mapred.TestMROpportunisticMaps
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 115.501 sec - 
in org.apache.hadoop.mapred.TestMROpportunisticMaps

Results :

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

...
{noformat}


> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum-2.patch, 
> YARN-5833.003.addendum.patch, YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5611:

Attachment: YARN-5611.0010.patch

Updated the patch to fix the review comments and the findbugs issue.

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.0009.patch, YARN-5611.0010.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required.
> Add a client API to update the lifetime of an application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5836) NMToken passwd not checked in ContainerManagerImpl, malicious AM can fake the Token and kill containers of other apps at will

2016-11-09 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651110#comment-15651110
 ] 

Botong Huang commented on YARN-5836:


It is theoretical so far. I am in the process of verifying it. I will update 
here when I get some results, thanks! 

> NMToken passwd not checked in ContainerManagerImpl, malicious AM can fake the 
> Token and kill containers of other apps at will
> -
>
> Key: YARN-5836
> URL: https://issues.apache.org/jira/browse/YARN-5836
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> When the AM calls the NM via stopContainers() in ContainerManagementProtocol, the
> NMToken (generated by the RM) is passed along via the user ugi. However, currently
> ContainerManagerImpl is not validating this token correctly, specifically in
> authorizeGetAndStopContainerRequest() in ContainerManagerImpl. Basically it
> blindly trusts the content in the NMTokenIdentifier without verifying the
> password (the RM-generated signature) in the NMToken, so a malicious AM can
> just fake the content in the NMTokenIdentifier and pass it to NMs. Moreover,
> currently even for the plain-text checking, when the appId doesn’t match, all it
> does is log a warning and continue to kill the container…
> For startContainers the NMToken is not checked correctly in authorizeUser()
> either; however, the ContainerToken is verified properly by regenerating and
> comparing the password in verifyAndGetContainerTokenIdentifier(), so a
> malicious AM cannot launch containers at will.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5836) NMToken passwd not checked in ContainerManagerImpl, malicious AM can fake the Token and kill containers of other apps at will

2016-11-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651095#comment-15651095
 ] 

Jason Lowe commented on YARN-5836:
--

So have you verified that a faked NM token "works" or was this theoretical?  If 
there's a case on a secure cluster where a faked NM token allowed an 
application master (or other agent) to connect to the NM then that's serious 
and needs to be fixed.  Otherwise we should update the JIRA headline to reflect 
this is tracking the missing exception for the invalid stop container request.

> NMToken passwd not checked in ContainerManagerImpl, malicious AM can fake the 
> Token and kill containers of other apps at will
> -
>
> Key: YARN-5836
> URL: https://issues.apache.org/jira/browse/YARN-5836
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> When the AM calls the NM via stopContainers() in ContainerManagementProtocol, the
> NMToken (generated by the RM) is passed along via the user ugi. However, currently
> ContainerManagerImpl is not validating this token correctly, specifically in
> authorizeGetAndStopContainerRequest() in ContainerManagerImpl. Basically it
> blindly trusts the content in the NMTokenIdentifier without verifying the
> password (the RM-generated signature) in the NMToken, so a malicious AM can
> just fake the content in the NMTokenIdentifier and pass it to NMs. Moreover,
> currently even for the plain-text checking, when the appId doesn’t match, all it
> does is log a warning and continue to kill the container…
> For startContainers the NMToken is not checked correctly in authorizeUser()
> either; however, the ContainerToken is verified properly by regenerating and
> comparing the password in verifyAndGetContainerTokenIdentifier(), so a
> malicious AM cannot launch containers at will.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651054#comment-15651054
 ] 

Hadoop QA commented on YARN-5611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 52s{color} | {color:orange} root: The patch generated 15 new + 765 unchanged 
- 3 fixed = 780 total (was 768) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
1s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 
45s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m  7s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}262m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Call to 
java.util.EnumSet.equals(org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState)
 in 

[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650971#comment-15650971
 ] 

Hadoop QA commented on YARN-4597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 25s{color} | {color:orange} root: The patch generated 15 new + 1052 
unchanged - 16 fixed = 1067 total (was 1068) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 235 unchanged - 1 fixed = 235 total (was 236) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-mapreduce-client-jobclient in 

[jira] [Commented] (YARN-5736) YARN container executor config does not handle white space

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650865#comment-15650865
 ] 

Hudson commented on YARN-5736:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10798 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10798/])
YARN-5736. YARN container executor config does not handle white space (rkanter: 
rev 09f43fa9c089ebfc18401ce84755d3f2000ba033)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c


> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5736.001.patch, YARN-5736.002.patch, 
> YARN_5736.000.patch
>
>
> The container executor configuration reader does not handle white space or
> malformed key-value pairs in the config file correctly or gracefully.
> As an example, take the following key-value line, which is part of the configuration
> (note that << is used as a marker to show the extra trailing space):
> yarn.nodemanager.linux-container-executor.group=yarn <<
> It is a valid line, but when you run the check over the file:
> [root@test]#./container-executor --checksetup
> Can't get group information for yarn - Success.
> [root@test]#
> it fails to find the yarn group, because it really tries to find the "yarn " group,
> which fails. There is no trimming anywhere while processing the lines. If a
> space were added before or after the = sign, a failure would also occur.
> A minor nit is the fact that a failure is still logged as a Success.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5862) TestDiskFailures.testLocalDirsFailures failed

2016-11-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650795#comment-15650795
 ] 

Varun Saxena commented on YARN-5862:


Changes look good to me.
Will commit it later today.

> TestDiskFailures.testLocalDirsFailures failed
> -
>
> Key: YARN-5862
> URL: https://issues.apache.org/jira/browse/YARN-5862
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5862.001.patch
>
>
> {code}
> java.util.NoSuchElementException: null 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:247)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:179)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5843) Documentation wrong for entityType/events rest end point

2016-11-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650736#comment-15650736
 ] 

Varun Saxena commented on YARN-5843:


+1.
Will commit it later today unless there are further comments.

> Documentation wrong for entityType/events rest end point
> 
>
> Key: YARN-5843
> URL: https://issues.apache.org/jira/browse/YARN-5843
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: YARN-5843.0001.patch, YARN-5843.0002.patch
>
>
> http(s)://<address:port>/ws/v1/timeline/{entityType}/events
> {noformat}
> entityIds - The entity IDs to retrieve events for.
> limit - A limit on the number of events to return for each entity. If null, 
> defaults to 100 events per entity.
> windowStart - If not null, retrieves only events later than the given time 
> (exclusive)
> windowEnd - If not null, retrieves only events earlier than the given time 
> (inclusive)
> eventTypes - Restricts the events returned to the given types. If null, 
> events of all types will be returned.
> {noformat}
> The parameters should be
> *entityId*
> *eventType*
> Mention that comma-separated *entityId* and *eventType* values can be passed for multiple arguments.
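
For illustration only, a hypothetical request against this endpoint using the corrected 
parameter names (the entity type, IDs, and event types below are made up):

{noformat}
GET http(s)://<address:port>/ws/v1/timeline/DS_APP_ATTEMPT/events?entityId=entity_1,entity_2&eventType=EVENT_A,EVENT_B&limit=100
{noformat}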



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-11-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650595#comment-15650595
 ] 

Sunil G commented on YARN-5375:
---

The patch looks good to me too. I will also share feedback from some tests to see 
whether we still have more random issues.

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-medium
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch, YARN-5375.06.patch, 
> YARN-5375.07-drain-statestore.patch, YARN-5375.07-sync-statestore.patch, 
> YARN-5375.08.patch, YARN-5375.09.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some state but 
> some events have not yet been processed in the RM event queue or the scheduler event 
> queue, causing the test to fail. It seems we could implicitly invoke drainEvents 
> (which should also drain scheduler events) in MockRM methods like waitForState.
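
A rough, generic sketch of that idea (draining events inside the wait helper so the 
check does not race with the dispatcher); the names below are hypothetical and the real 
change would live in MockRM's waitForState methods:

{code}
import java.util.function.Supplier;

public final class DrainingWaiter {
  // Drain pending events before and while polling for the expected state.
  static <T> void waitFor(Runnable drainEvents, Supplier<T> currentState,
                          T expected, long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    drainEvents.run();                      // flush RM/scheduler event queues first
    while (!expected.equals(currentState.get())
        && System.currentTimeMillis() < deadline) {
      Thread.sleep(100);
      drainEvents.run();                    // keep draining while we wait
    }
    if (!expected.equals(currentState.get())) {
      throw new AssertionError("Did not reach expected state: " + expected);
    }
  }
}
{code}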



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650551#comment-15650551
 ] 

Varun Saxena edited comment on YARN-5739 at 11/9/16 10:26 AM:
--

bq. WRT caching, I am wondering, if there might be a query coming next for the 
details of these entity types?
Caching is set per Scan, right? Alternatively, a PageFilter can be used to retrieve 
only one record per region server from the backend.
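
For illustration only, a minimal sketch of the two options using the HBase client API 
(per-Scan caching versus a PageFilter); the class and method names are hypothetical:

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;

public final class EntityTypeScans {
  // Option 1: caching is set per Scan and controls how many rows a single RPC
  // to a region server prefetches for this scan.
  static Scan scanWithSmallCache() {
    Scan scan = new Scan();
    scan.setCaching(1);
    return scan;
  }

  // Option 2: PageFilter(1) asks each region server to return at most one row,
  // which is enough when we only need to discover the distinct entity types.
  static Scan scanWithPageFilter() {
    Scan scan = new Scan();
    scan.setFilter(new PageFilter(1));
    return scan;
  }
}
{code}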


was (Author: varun_saxena):
bq. WRT caching, I am wondering, if there might be a query coming next for the 
details of these entity types?
Caching is set per Scan. Right ? Anyways alternatively PageFilter can be used 
to retrieve only one record from backend.

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates), we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650567#comment-15650567
 ] 

Sunil G commented on YARN-5611:
---

Hi [~rohithsharma],
Few more comments:

1. The StateStore exception is thrown back as-is. It would be better to wrap it and 
send the error back to the client with more information, such as the appId (see the 
sketch after these comments).

2. As per a 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-5870?focusedCommentId=14737035=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14737035]
 from Jason earlier in MAPREDUCE-5870, we need not throw an exception for 
apps which are completing. A similar approach was taken for app priority in 
YARN-4141. I think we need to consider the same for app timeouts too.
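
A hedged sketch of the wrapping idea in comment (1); the helper class, method, and the 
way the store update is invoked are hypothetical, not the actual RM code:

{code}
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.exceptions.YarnException;

public final class TimeoutUpdateHelper {
  // Wrap a raw state-store failure in a YarnException that names the application,
  // so the client sees which app the timeout update failed for.
  static void updateApplicationTimeout(ApplicationId appId, Runnable storeUpdate)
      throws YarnException {
    try {
      storeUpdate.run();   // e.g. persist the new timeout in the RMStateStore
    } catch (RuntimeException e) {
      throw new YarnException(
          "Failed to update timeout for application " + appId, e);
    }
  }
}
{code}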


> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.0009.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650551#comment-15650551
 ] 

Varun Saxena commented on YARN-5739:


bq. WRT caching, I am wondering, if there might be a query coming next for the 
details of these entity types?
Caching is set per Scan, right? Alternatively, a PageFilter can be used to retrieve 
only one record from the backend.

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates), we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-09 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5611:

Attachment: YARN-5611.0009.patch

Updating the patch with the following changes, using a typical transaction approach:
# Store it in the RMStateStore. 
# Update it in memory. 
# No two updates for the same application can proceed together. 

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.0009.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5846) Improve the fairscheduler attemptScheduler

2016-11-09 Thread zhangyubiao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650458#comment-15650458
 ] 

zhangyubiao commented on YARN-5846:
---

:)

> Improve the fairscheduler attemptScheduler 
> ---
>
> Key: YARN-5846
> URL: https://issues.apache.org/jira/browse/YARN-5846
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>Priority: Critical
>  Labels: fairscheduler
> Fix For: 2.7.1
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When assigning a container, we must consider two factors:
> (1) sort the queues and applications, and select the proper request; 
> (2) then make sure the request's host is this node (data locality), 
> or skip this iteration.
> This algorithm treats the sorting of queues and applications as the primary factor. 
> When YARN considers data locality, for example with 
> yarn.scheduler.fair.locality.threshold.node=1 and 
> yarn.scheduler.fair.locality.threshold.rack=1 (or with 
> yarn.scheduler.fair.locality-delay-rack-ms and 
> yarn.scheduler.fair.locality-delay-node-ms set very large) and lots of 
> applications are running, the process of assigning containers becomes very slow.
> I think data locality is more important than the order of the queues and 
> applications. 
> I want a new algorithm like this:
>   (1) when the ResourceManager accepts a new request, notify the RMNodeImpl 
> and record the association between the RMNode and the request;
>   (2) when assigning containers for a node, assign containers directly from 
> RMNodeImpl's association between the RMNode and the requests;
>   (3) then consider the priority of queues and applications: within one 
> RMNodeImpl object, sort the associated requests;
>   (4) the sorting in the current algorithm is expensive, especially when 
> lots of applications are running and many sorts are invoked, so I think we 
> should sort the queues and applications in a daemon thread, because a small 
> error in the queues' ordering is acceptable.
>   
>   
>   
>   
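
For illustration, a hedged Java sketch of the locality settings quoted in the description 
(the two threshold properties are taken from the text above; the surrounding class is 
hypothetical and the values are not a recommendation):

{code}
import org.apache.hadoop.conf.Configuration;

public final class FairSchedulerLocalityConfig {
  // Build a Configuration with the delay-scheduling thresholds mentioned above.
  // A threshold of 1.0 makes the scheduler pass up scheduling opportunities on
  // (up to) every node before relaxing locality, which is the slow case described.
  static Configuration strictLocalityConf() {
    Configuration conf = new Configuration();
    conf.setFloat("yarn.scheduler.fair.locality.threshold.node", 1.0f);
    conf.setFloat("yarn.scheduler.fair.locality.threshold.rack", 1.0f);
    return conf;
  }
}
{code}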



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5846) Improve the fairscheduler attemptScheduler

2016-11-09 Thread zhengchenyu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-5846:
--
Priority: Critical  (was: Minor)

> Improve the fairscheduler attemptScheduler 
> ---
>
> Key: YARN-5846
> URL: https://issues.apache.org/jira/browse/YARN-5846
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: CentOS-7.1
>Reporter: zhengchenyu
>Priority: Critical
>  Labels: fairscheduler
> Fix For: 2.7.1
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When assigning a container, we must consider two factors:
> (1) sort the queues and applications, and select the proper request; 
> (2) then make sure the request's host is this node (data locality), 
> or skip this iteration.
> This algorithm treats the sorting of queues and applications as the primary factor. 
> When YARN considers data locality, for example with 
> yarn.scheduler.fair.locality.threshold.node=1 and 
> yarn.scheduler.fair.locality.threshold.rack=1 (or with 
> yarn.scheduler.fair.locality-delay-rack-ms and 
> yarn.scheduler.fair.locality-delay-node-ms set very large) and lots of 
> applications are running, the process of assigning containers becomes very slow.
> I think data locality is more important than the order of the queues and 
> applications. 
> I want a new algorithm like this:
>   (1) when the ResourceManager accepts a new request, notify the RMNodeImpl 
> and record the association between the RMNode and the request;
>   (2) when assigning containers for a node, assign containers directly from 
> RMNodeImpl's association between the RMNode and the requests;
>   (3) then consider the priority of queues and applications: within one 
> RMNodeImpl object, sort the associated requests;
>   (4) the sorting in the current algorithm is expensive, especially when 
> lots of applications are running and many sorts are invoked, so I think we 
> should sort the queues and applications in a daemon thread, because a small 
> error in the queues' ordering is acceptable.
>   
>   
>   
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15650342#comment-15650342
 ] 

Hudson commented on YARN-5823:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10797 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10797/])
YARN-5823. Update NMTokens in case of requests with only opportunistic (arun 
suresh: rev 283fa33febe043bd7b4fa87546be26c9c5a8f8b5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestOpportunisticContainerAllocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/scheduler/DistributedScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/OpportunisticContainerAllocatorAMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/OpportunisticContainerAllocator.java


> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5823.001.patch, YARN-5823.002.patch, 
> YARN-5823.003.patch, YARN-5823.004.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case the AM does not get back the needed NMTokens that are required 
> to start the opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


