[jira] [Commented] (YARN-3854) Add localization support for docker images

2016-08-31 Thread Zhankun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454397#comment-15454397
 ] 

Zhankun Tang commented on YARN-3854:


We need to delete the application-scoped Docker image on the local host once the 
application finishes, so this hint file is used to record the Docker images that 
may be deleted at some point by the DeletionService. I guess we should do it this 
way, but this is not the final version. Any alternative options are welcome.
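
For illustration, the hint file could be as simple as this sketch (the file 
name, format and variables here are assumptions, not the patch's actual layout):

{code}
// Illustrative only: record the localized image name so that
// DeletionService can later schedule a "docker rmi <image>" for it.
File hintFile = new File(resourceLocalDir, ".docker-image-hint");
try (Writer out = new FileWriter(hintFile)) {
  out.write(imageName); // e.g. "library/ubuntu:16.04"
}
{code}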

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here, with different trade-offs/issues: image archives on HDFS + docker load, 
> docker pull during the localization phase, or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 






[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-08-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454389#comment-15454389
 ] 

Jian He commented on YARN-5554:
---

For the permission part: should we check (submit_acl_on_target_queue || 
target_queue_adminAcl) && application_acl?
The first half means permission on the target queue; the second half means 
permission on the application itself.
I think the first half is also what is currently used in SubmitApplication.
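
In code, the proposed check might look roughly like this (a sketch only; the 
helper wiring is assumed, not the actual RM code):

{code}
// Sketch: allow the move only if the caller can submit to or administer
// the target queue AND can modify the application itself.
boolean queueAccess =
    scheduler.checkAccess(callerUgi, QueueACL.SUBMIT_APPLICATIONS, targetQueue)
    || scheduler.checkAccess(callerUgi, QueueACL.ADMINISTER_QUEUE, targetQueue);
boolean appAccess = appAclsManager.checkAccess(callerUgi,
    ApplicationAccessType.MODIFY_APP, app.getUser(), appId);
if (!(queueAccess && appAccess)) {
  throw new YarnException("User " + callerUgi.getShortUserName()
      + " cannot move " + appId + " to queue " + targetQueue);
}
{code}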

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch
>
>
> The moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows a user to move 
> his/her own applications to a queue that the user has no access to.






[jira] [Updated] (YARN-5608) TestAMRMClient.setup() fails with ArrayOutOfBoundsException

2016-08-31 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5608:

Attachment: YARN-5608.patch

Added an assertion to verify that the number of nodeReports is the same as 
nodeCount. But the test will still fail with the added assertion; the real root 
cause, why the nodes did not get registered, still needs to be found. Maybe the 
test run logs will help to reveal these details.
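
The added check is along these lines (a sketch of the intent, not the exact 
patch contents):

{code}
// Fail with a clear message instead of an IndexOutOfBoundsException when
// fewer NodeManagers have registered than expected.
List<NodeReport> nodeReports = yarnClient.getNodeReports(NodeState.RUNNING);
Assert.assertEquals("All NodeManagers should have registered with the RM",
    nodeCount, nodeReports.size());
{code}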

> TestAMRMClient.setup() fails with ArrayOutOfBoundsException
> ---
>
> Key: YARN-5608
> URL: https://issues.apache.org/jira/browse/YARN-5608
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
> Attachments: YARN-5608.patch
>
>
> After 39 runs of the {{TestAMRMClient}} test, I encountered:
> {noformat}
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.setup(TestAMRMClient.java:144)
> {noformat}
> I see it shows up occasionally in the error emails as well.






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-31 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454304#comment-15454304
 ] 

Rohith Sharma K S commented on YARN-5561:
-

Separating out YARN-specific details is a good idea, similar to v1, but my vote 
is 50-50 for this approach.

In v1, entities were fully separated out from YARN-specific details. In v2, 
however, apart from the entities, *Query Apps for a Flow*, *Query Apps for a 
Flow Run* and other details are in TimelineReaderWebService. These belong to 
YARN-specific details regardless of the underlying storage schema. All the 
entities are published under application scope, which makes it harder for devs 
to decide where to add new YARN-specific REST end points. 
 
From the user perspective: retrieving all the apps for a flow/flowrun would use 
the path /ws/v2/timeline, but retrieving attempts would use the path 
/ws/v2/applicationhistory. That raises an obvious question for users: why are 
there two different paths for the same application details? 

Maybe we can gather other folks' thoughts on this too. 

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be: 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/entities}}, which should display the 
> list of entities that can be queried.  






[jira] [Commented] (YARN-4232) TopCLI console support for HA mode

2016-08-31 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454294#comment-15454294
 ] 

Bibin A Chundatt commented on YARN-4232:


Verified the same in a secure Kerberos cluster; it is working fine.

> TopCLI console support for HA mode
> --
>
> Key: YARN-4232
> URL: https://issues.apache.org/jira/browse/YARN-4232
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4232.patch, 0002-YARN-4232.patch, 
> YARN-4232.003.patch
>
>
> *Steps to reproduce*
> Start the top command in YARN in HA mode:
> ./yarn top
> {noformat}
> usage: yarn top
>  -cols  Number of columns on the terminal
>  -delay The refresh delay(in seconds), default is 3 seconds
>  -help   Print usage; for help while the tool is running press 'h'
>  + Enter
>  -queuesComma separated list of queues to restrict applications
>  -rows  Number of rows on the terminal
>  -types Comma separated list of types to restrict applications,
>  case sensitive(though the display is lower case)
>  -users Comma separated list of users to restrict applications
> {noformat}
> Execute *for help while the tool is running press 'h'  + Enter* while the top 
> tool is running.
> An exception is thrown in the console continuously:
> {noformat}
> 15/10/07 14:59:28 ERROR cli.TopCLI: Could not fetch RM start time
> java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at java.net.Socket.connect(Socket.java:538)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
> at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
> at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
> at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
> at 
> org.apache.hadoop.yarn.client.cli.TopCLI.getRMStartTime(TopCLI.java:742)
> at org.apache.hadoop.yarn.client.cli.TopCLI.run(TopCLI.java:467)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.yarn.client.cli.TopCLI.main(TopCLI.java:420)
> {noformat}






[jira] [Updated] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-08-31 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4997:
--
Attachment: YARN-4997-008.patch

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch, 
> YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, 
> YARN-4997-006.patch, YARN-4997-007.patch, YARN-4997-008.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.






[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-08-31 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454253#comment-15454253
 ] 

Tao Jie commented on YARN-4997:
---

I looked more closely at {{synchronized}} in onReload. {{onReload}} here runs 
under the lock of {{AllocationFileLoaderService}}, but initialization of the 
authorizer happens under the lock of {{FairScheduler}}. As a result, we had 
better keep all access to the authorizer under the same {{FairScheduler}} lock. 
(Actually, initScheduler and reload won't happen at the same time, but they are 
called from different threads.) 
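
A minimal sketch of the pattern, assuming the listener remains an inner class 
of {{FairScheduler}}:

{code}
// Sketch: take the FairScheduler lock inside onReload so that authorizer
// access happens under the same lock that initScheduler() holds, even
// though the two are called from different threads.
private class AllocationReloadListener
    implements AllocationFileLoaderService.Listener {
  @Override
  public void onReload(AllocationConfiguration queueInfo) {
    synchronized (FairScheduler.this) {
      allocConf = queueInfo;
      // set queue ACLs / authorizer permissions here, under the same lock
    }
  }
}
{code}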

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch, 
> YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, 
> YARN-4997-006.patch, YARN-4997-007.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.






[jira] [Commented] (YARN-5606) Support multi-label merge into one node label

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454254#comment-15454254
 ] 

Hadoop QA commented on YARN-5606:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 4 
new + 457 unchanged - 0 fixed = 461 total (was 457) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 23s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 52s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826544/YARN-5606.002.patch |
| JIRA Issue | YARN-5606 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bdf48c59893a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f4b0d3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5582) SchedulerUtils#validate vcores even for DefaultResourceCalculator

2016-08-31 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454243#comment-15454243
 ] 

Rohith Sharma K S commented on YARN-5582:
-

+1 for using Resources for the comparisons. 
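
For illustration, routing the validation through the configured calculator 
could look something like this (a sketch, not the committed fix; the variable 
wiring is assumed):

{code}
// With DefaultResourceCalculator this comparison considers only memory,
// so vcores are no longer validated; DominantResourceCalculator would
// still take both dimensions into account.
ResourceCalculator rc = scheduler.getResourceCalculator();
Resource requested = resReq.getCapability();
if (rc.compare(clusterResource, requested, maximumResource) > 0) {
  throw new InvalidResourceRequestException("Invalid resource request "
      + requested + ", exceeds maximum allowed " + maximumResource);
}
{code}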

> SchedulerUtils#validate vcores even for DefaultResourceCalculator
> -
>
> Key: YARN-5582
> URL: https://issues.apache.org/jira/browse/YARN-5582
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> Configure Memory=20 GB and 3 vcores.
> Submit a request for 5 containers, each with 4 GB memory and 5 cores, from a 
> mapreduce application.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
>  Invalid resource request, requested virtual cores < 0, or requested virtual 
> cores > max configured, requestedVirtualCores=5, maxVirtualCores=3
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:274)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:250)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:105)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:703)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:65)
> at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:115)
> {noformat}
> We should not validate cores when the resource calculator is 
> {{org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator}}.






[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454227#comment-15454227
 ] 

Rohith Sharma K S commented on YARN-5585:
-

bq. Can we translate the fromId request into some HBase filters so that we can 
process this request on the storage layer?
Ultimately that would be the better way to do it, rather than at the 
TimelineReader API level. But I am not sure whether HBase filters can support 
scanning rows that are less than or greater than given ids. I will look at this 
with high priority; it makes more sense to me. 

bq. but note that this requires some in-memory operation to actually sort all 
entities, but not only read part of them out from the storage?
This is the current behavior. Entities are already sorted using 
TimelineClient#compareTo in {{TimelineEntityReader#readEntities}}. Another 
loop over the sorted entities is required to achieve this.

bq. The problem with this kind of an approach is that new apps keep on getting 
added so result may not be latest.
That is fine with me. I agree with Lilu.
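
For reference, the extra loop at the reader level would look roughly like this 
(a sketch with assumed variable names, not the patch itself):

{code}
// Sketch: 'sortedEntities', 'fromId' and 'limit' are assumed to come from
// TimelineEntityReader#readEntities and the REST query parameters.
List<TimelineEntity> page = new ArrayList<>();
boolean pastFromId = (fromId == null);
for (TimelineEntity entity : sortedEntities) {
  if (!pastFromId) {
    // skip entities up to and including fromId
    pastFromId = fromId.equals(entity.getId());
    continue;
  }
  page.add(entity);
  if (page.size() >= limit) {
    break;
  }
}
{code}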



> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: if applications app-1, app-2 ... app-10 are stored in the database, 
> *getApps?limit=5* gives app-1 to app-5, but it is difficult to retrieve the 
> next 5 apps.
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.






[jira] [Commented] (YARN-3854) Add localization support for docker images

2016-08-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454199#comment-15454199
 ] 

Jian He commented on YARN-3854:
---

Thanks [~tangzhankun], a question on this:
bq.  leave a hint file in corresponding resource local directory to record the 
image name.
What is this hint file used for?

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here, with different trade-offs/issues: image archives on HDFS + docker load, 
> docker pull during the localization phase, or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 






[jira] [Commented] (YARN-5601) Make the RM epoch base value configurable

2016-08-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454177#comment-15454177
 ] 

Jian He commented on YARN-5601:
---

Sounds good. Can you suppress the findbugs warning?

> Make the RM epoch base value configurable
> -
>
> Key: YARN-5601
> URL: https://issues.apache.org/jira/browse/YARN-5601
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5601-YARN-2915-v1.patch
>
>
> Currently the epoch always starts from zero. This can cause container ids to 
> conflict for an application under Federation that spans multiple RMs 
> concurrently. This JIRA proposes to make the RM epoch base value configurable 
> which will allow us to avoid conflicts by setting different values for each 
> RM.






[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454158#comment-15454158
 ] 

Arun Suresh commented on YARN-5221:
---

The failed tests run fine locally for me. Committing this to branch-2.8 shortly 
and resolving this.

> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221-branch-2-v1.patch, 
> YARN-5221-branch-2.8-v1.patch, YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch, 
> YARN-5221.009.patch, YARN-5221.010.patch, YARN-5221.011.patch, 
> YARN-5221.012.patch, YARN-5221.013.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease 
> of Container Resources after initial allocation.
> YARN-5085 proposes to allow an AM to request for a change of Container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.






[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454132#comment-15454132
 ] 

Hadoop QA commented on YARN-5221:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 54s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 33s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 45s 
{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
29s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 4s 
{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 40s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 5s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 17s 
{color} | {color:red} root: The patch generated 12 new + 925 unchanged - 70 
fixed = 937 total (was 995) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_101 with JDK 
v1.8.0_101 generated 0 new + 149 unchanged - 7 fixed = 149 total (was 156) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} 

[jira] [Updated] (YARN-5606) Support multi-label merge into one node label

2016-08-31 Thread jialei weng (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jialei weng updated YARN-5606:
--
Attachment: YARN-5606.002.patch

> Support multi-label merge into one node label
> -
>
> Key: YARN-5606
> URL: https://issues.apache.org/jira/browse/YARN-5606
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: jialei weng
> Attachments: YARN-5606.001.patch, YARN-5606.002.patch
>
>
> Support multi-label merge into one node label
> 1. We want to support multiple labels like SSD,GPU,FPGA merged into a single 
> node label, joined by &. like SSD,GPU,FPGA -> SSD
> 2. We add wildcard matching to extend the job request. We define a wildcard 
> like *GPU*; it will match all the node labels that have GPU as part of the 
> merged multi-label. For example, *GPU* will match  SSD, GPU We 
> define  SSD={SSD,GPU,FPGA}, and GPU is one of {SSD,GPU,FPGA}, so the 
> job can run on the  SSD node.






[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-08-31 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454117#comment-15454117
 ] 

Tao Jie commented on YARN-4997:
---

Thank you for your comments, [~kasha].
{quote}
Noticed there is QueueACL is mapreduce code as well that can be dropped 
altogether? e.g. mapred QueueManager, many parts (all of?) QueueACL etc. Can we 
file a follow-up JIRA to drop all of that?
{quote}
I am not sure whether such code in mapred.QueueManager still works today. I 
would prefer to clean up that mapreduce code in another JIRA.
{quote} 
onReload: Is there a need to lock the scheduler when setting permissions? Would 
it be okay to limit the synchronized block to whatever was synchronized before?
{quote}
As discussed with [~templedf], the synchronized block added here is to avoid a 
findbugs warning. Actually, I would be glad to remove the redundant lock here.
{quote}
In setQueueAcls, we seem to initially set to default permissions and then 
"overwrite" it with final permissions. Is the first one necessary? I quickly 
looked at implementation of ConfiguredAuthorizationProvider, setPermission's 
semantics appear to be somewhere between append and overwrite. If it is append, 
may be we should change that name to addPermission?
{quote}
I also feel a little confused about the semantics of {{setPermission}}. 
However, this abstract method was introduced by YARN-3100, and I'm not sure 
whether {{setPermission}} has been implemented in Ranger or Sentry. I would 
prefer to keep {{setPermission}} here, as CapacityScheduler does, for 
compatibility. Maybe we could refactor it in another JIRA (perhaps separating 
{{setPermission}} into {{setPermission}}, {{addPermission}}, 
{{removePermission}} and {{clearPermission}}). Does that make sense?
I will update this patch soon.
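
To make the last point concrete, a purely hypothetical split of the interface 
could look like this (YARN-3100's provider defines only {{setPermission}}; all 
names and signatures below are illustrative):

{code}
// Hypothetical refactoring sketch, for discussion only.
public interface PermissionProvider {
  void setPermission(List<Permission> permissions);    // replace existing
  void addPermission(List<Permission> permissions);    // append
  void removePermission(List<Permission> permissions); // remove selected
  void clearPermission();                              // remove all
}
{code}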

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch, 
> YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, 
> YARN-4997-006.patch, YARN-4997-007.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.






[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454040#comment-15454040
 ] 

Hadoop QA commented on YARN-4855:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The 
patch generated 5 new + 97 unchanged - 3 fixed = 102 total (was 100) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 59s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
|  |  Write to static field 
org.apache.hadoop.yarn.client.cli.RMAdminCLI.yarnClient from instance method 
org.apache.hadoop.yarn.client.cli.RMAdminCLI.setYarnClient(YarnClient)  At 
RMAdminCLI.java:from instance method 
org.apache.hadoop.yarn.client.cli.RMAdminCLI.setYarnClient(YarnClient)  At 
RMAdminCLI.java:[line 179] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826540/YARN-4855.006.patch |
| JIRA Issue | YARN-4855 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9e39ff4426f6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f4b0d3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12980/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/12980/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.html
 |
|  Test Results | 

[jira] [Comment Edited] (YARN-5601) Make the RM epoch base value configurable

2016-08-31 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453991#comment-15453991
 ] 

Subru Krishnan edited comment on YARN-5601 at 9/1/16 1:43 AM:
--

[~jianhe], to answer your question, let me start with why we need the epoch in 
a federated cluster:
currently only a single RM generates containerIDs (applicationID + a sequence 
number), but in a federated cluster there are multiple RMs that are 
concurrently generating them. So there will be conflicts if an application 
spans multiple sub-clusters. To avoid this conflict, we use the epoch in a 
federated cluster, similar to how it's used in the context of work-preserving 
restarts to prevent conflicts.

The idea is that we will set the epoch base to 0 for the first sub-cluster RM, 
10k for the second sub-cluster RM, 20k for the third sub-cluster RM, etc. This 
should be sufficient, as we have ~1M epochs since they are represented as a 
20-bit integer. With this, there will be a conflict of containerIDs only if 
*all* of the below conditions are satisfied: 
  # The RM of sub-cluster 1 is rebooted over 10k times 
  # There is an App that is still running (during over 10k reboots of one 
of the RMs)
  # The app is run across sub-cluster 1 and sub-cluster 2
  # The app is still holding onto containers from sub-cluster 2 issued from 
the first reboot of that sub-cluster
  # The containers have Ids low enough that the newly issued containers from 
RM1 clash
 
Makes sense?


was (Author: subru):
[~jianhe], to answer your question, let me start with why we need the epoch in 
a federated cluster:
currently only a single RM generates containerIDs (applicationID + a sequence 
number), but in a federated cluster there are multiple RMs that are 
concurrently generating them. So there will be conflicts if an application 
spans multiple sub-clusters. To avoid this conflict, we use the epoch in a 
federated cluster, similar to how it's used in the context of work-preserving 
restarts to prevent conflicts.

The idea is that we will set the epoch base to 0 for the first sub-cluster RM, 
10k for the second sub-cluster RM, 20k for the third sub-cluster RM, etc. This 
should be sufficient, as we have ~1M epochs since they are represented as a 
20-bit integer. With this, there will be a conflict of containerIDs only if 
*all* of the below conditions are satisfied: 
  1) The RM of sub-cluster 1 is rebooted over 10k times 
  2) There is an App that is still running (during over 10k reboots of 
one of the RMs)
  3) The app is run across sub-cluster 1 and sub-cluster 2
  4) The app is still holding onto containers from sub-cluster 2 issued from 
the first reboot of that sub-cluster
  5) The containers have Ids low enough that the newly issued containers from 
RM1 clash
 
Makes sense?

> Make the RM epoch base value configurable
> -
>
> Key: YARN-5601
> URL: https://issues.apache.org/jira/browse/YARN-5601
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5601-YARN-2915-v1.patch
>
>
> Currently the epoch always starts from zero. This can cause container ids to 
> conflict for an application under Federation that spans multiple RMs 
> concurrently. This JIRA proposes to make the RM epoch base value configurable 
> which will allow us to avoid conflicts by setting different values for each 
> RM.






[jira] [Commented] (YARN-5601) Make the RM epoch base value configurable

2016-08-31 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453991#comment-15453991
 ] 

Subru Krishnan commented on YARN-5601:
--

[~jianhe], to answer your question, let me start with why we need the epoch in 
a federated cluster:
currently only a single RM generates containerIDs (applicationID + a sequence 
number), but in a federated cluster there are multiple RMs that are 
concurrently generating them. So there will be conflicts if an application 
spans multiple sub-clusters. To avoid this conflict, we use the epoch in a 
federated cluster, similar to how it's used in the context of work-preserving 
restarts to prevent conflicts.

The idea is that we will set the epoch base to 0 for the first sub-cluster RM, 
10k for the second sub-cluster RM, 20k for the third sub-cluster RM, etc. This 
should be sufficient, as we have ~1M epochs since they are represented as a 
20-bit integer. With this, there will be a conflict of containerIDs only if 
*all* of the below conditions are satisfied: 
  1) The RM of sub-cluster 1 is rebooted over 10k times 
  2) There is an App that is still running (during over 10k reboots of 
one of the RMs)
  3) The app is run across sub-cluster 1 and sub-cluster 2
  4) The app is still holding onto containers from sub-cluster 2 issued from 
the first reboot of that sub-cluster
  5) The containers have Ids low enough that the newly issued containers from 
RM1 clash
 
Makes sense?
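
A quick back-of-the-envelope check of the headroom this gives (the 10k base 
spacing is read off the conditions above and is illustrative, not a committed 
constant):

{code}
long totalEpochs = 1L << 20;        // 20-bit epoch field => ~1.05M values
long baseSpacing = 10_000;          // assumed gap between RM base values
long maxSubClusters = totalEpochs / baseSpacing;
System.out.println(maxSubClusters); // 104 sub-clusters before bases wrap
{code}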

> Make the RM epoch base value configurable
> -
>
> Key: YARN-5601
> URL: https://issues.apache.org/jira/browse/YARN-5601
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-5601-YARN-2915-v1.patch
>
>
> Currently the epoch always starts from zero. This can cause container ids to 
> conflict for an application under Federation that spans multiple RMs 
> concurrently. This JIRA proposes to make the RM epoch base value configurable 
> which will allow us to avoid conflicts by setting different values for each 
> RM.






[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-08-31 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453986#comment-15453986
 ] 

Tao Jie commented on YARN-4855:
---

Refreshed the patch and fixed the checkstyle issues.

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch
>
>
> Today when we add nodelabels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.






[jira] [Updated] (YARN-4855) Should check if node exists when replace nodelabels

2016-08-31 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4855:
--
Attachment: YARN-4855.006.patch

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch
>
>
> Today when we add nodelabels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-31 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453922#comment-15453922
 ] 

Giovanni Matteo Fumarola commented on YARN-5323:


Thanks [~curino] for the patch. 
Minor fixes:
1) In FederationPolicyInitializationContext, move private 
FederationStateStoreFacade federationStateStoreFacade; next to the other 
variables.
2) In FederationPolicyInitializationContext, add a constructor (see the sketch 
below).
3) Add a test for FederationPolicyInitializationContextValidator (same as 
TestFederationStateStoreInputValidator). In this way we will also fix 
test4tests.
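
For point 2, the constructor could look something like this (the field names 
are assumed from the comment; the actual class lives in the YARN-2915 branch):

{code}
// Hypothetical sketch; adjust to the actual fields of the class.
public FederationPolicyInitializationContext(
    SubClusterPolicyConfiguration policyConfiguration,
    SubClusterResolver resolver,
    FederationStateStoreFacade facade) {
  this.federationPolicyConfiguration = policyConfiguration;
  this.federationSubclusterResolver = resolver;
  this.federationStateStoreFacade = facade;
}
{code}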

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323-YARN-2915.07.patch, 
> YARN-5323.01.patch, YARN-5323.02.patch, YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks the APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward the job submission/query requests as 
> well as ResourceRequests.






[jira] [Updated] (YARN-5331) Extend RLESparseResourceAllocation with period for supporting recurring reservations in YARN ReservationSystem

2016-08-31 Thread Sangeetha Abdu Jyothi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Abdu Jyothi updated YARN-5331:

Attachment: YARN-5331.001.patch

> Extend RLESparseResourceAllocation with period for supporting recurring 
> reservations in YARN ReservationSystem
> --
>
> Key: YARN-5331
> URL: https://issues.apache.org/jira/browse/YARN-5331
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
> Attachments: YARN-5331.001.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to add a 
> PeriodicRLESparseResourceAllocation. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Updated] (YARN-5331) Extend RLESparseResourceAllocation with period for supporting recurring reservations in YARN ReservationSystem

2016-08-31 Thread Sangeetha Abdu Jyothi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Abdu Jyothi updated YARN-5331:

Flags: Patch

> Extend RLESparseResourceAllocation with period for supporting recurring 
> reservations in YARN ReservationSystem
> --
>
> Key: YARN-5331
> URL: https://issues.apache.org/jira/browse/YARN-5331
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
> Attachments: YARN-5331.001.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to add a 
> PeriodicRLESparseResourceAllocation. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453834#comment-15453834
 ] 

Hadoop QA commented on YARN-5323:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
31s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826525/YARN-5323-YARN-2915.07.patch
 |
| JIRA Issue | YARN-5323 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6b59a1a54fe8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / c77269d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12979/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12979/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12979/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Policies APIs (for Router and AMRMProxy policies)
> 

[jira] [Commented] (YARN-4876) Decoupled Init / Destroy of Containers from Start / Stop

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453803#comment-15453803
 ] 

Hadoop QA commented on YARN-4876:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 51s {color} 
| {color:red} root generated 1 new + 708 unchanged - 0 fixed = 709 total (was 
708) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 44s 
{color} | {color:red} root: The patch generated 161 new + 1256 unchanged - 11 
fixed = 1417 total (was 1267) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 85 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
9 new + 123 unchanged - 0 fixed = 132 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 5 new + 242 unchanged - 0 fixed = 247 total (was 242) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 4 new + 157 unchanged - 0 fixed = 161 total (was 157) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 25s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 14s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 15s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 

[jira] [Commented] (YARN-4945) [Umbrella] Capacity Scheduler Preemption Within a queue

2016-08-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453773#comment-15453773
 ] 

Wangda Tan commented on YARN-4945:
--

Hi Sunil,

Thanks for the update. I haven't dug into too many details of this patch yet; I 
mainly looked at the interactions between the preemptable-resource-calculator and 
the candidate-selector.

Some comments so far:

1) What is intraQueuePreemptionCost? And what is AppPriorityComparator?

2) IntraQueuePreemptableResourceCalculator:
2.1) For the queue-level ideal allocation, is it enough to trust the result from 
previous policies? PreemptableResourceCalculator saves the per-queue ideal 
allocation to PCPP#queueToPartitions. I think we don't need to add extra logic in 
the IntraQueuePreemptableResourceCalculator to recursively calculate it. 
Correct?

2.2 computeIntraQueuePreemptionDemand:
- {{if (tq.intraQueuePreemptionCalculationDone == true)}}, is this always false?
- Should we lock the queue inside {{for (LeafQueue leafQUeue : queues)}}?
- The return value of getResourceDemandFromAppsPerQueue is not consumed by anyone.
- TempAppPerQueue is not used by anyone.

Since the logic looks incomplete so far, here are some thoughts about the 
implementation/overall code structure from my side; hope they help.

{code}
IntraQueuePreemptableResourceCalculator {
1. Uses ideal resource calculated by previous policies. 
2. Compute per-app ideal/preemptable resource according to per-queue 
policies. (Stored in TempAppPerPartition)
}

IntraQueueCandidateSelector {
1. Invoke IntraQueuePreemptableResourceCalculator to calculate ideal/preemptable 
resources of apps
2. Use 
CapacitySchedulerPreemptionUtils.deductPreemptableResourcesBasedSelectedCandidates
 to deduct preemptable resource for already selected containers.

for (leafqueue from most underserved) {
for (apps in reverse order) {
if (app.preemptable > 0) {
// Select container logic.
}
}
}
}
{code}

> [Umbrella] Capacity Scheduler Preemption Within a queue
> ---
>
> Key: YARN-4945
> URL: https://issues.apache.org/jira/browse/YARN-4945
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
> Attachments: Intra-Queue Preemption Use Cases.pdf, 
> IntraQueuepreemption-CapacityScheduler (Design).pdf, YARN-2009-wip.2.patch, 
> YARN-2009-wip.patch
>
>
> This is umbrella ticket to track efforts of preemption within a queue to 
> support features like:
> YARN-2009. YARN-2113. YARN-4781.






[jira] [Updated] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5323:
---
Attachment: YARN-5323-YARN-2915.07.patch

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323-YARN-2915.07.patch, 
> YARN-5323.01.patch, YARN-5323.02.patch, YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward the job submission/query requests as 
> well as ResourceRequests.






[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-31 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453776#comment-15453776
 ] 

Carlo Curino commented on YARN-5323:


Actually, [~subru] convinced me about the Facade, and helped fix the patch 
attached now (.07).

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323-YARN-2915.07.patch, 
> YARN-5323.01.patch, YARN-5323.02.patch, YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward the job submission/query requests as 
> well as ResourceRequests.






[jira] [Commented] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453750#comment-15453750
 ] 

Hadoop QA commented on YARN-5549:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 23s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826515/YARN-5549.005.patch |
| JIRA Issue | YARN-5549 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 371b3d2e146b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 85bab5f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12978/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-31 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453742#comment-15453742
 ] 

Junping Du commented on YARN-5566:
--

bq. I'm not exactly sure why this is happening, but from what I can tell, this 
issue is based on some timing of when things occur, and somehow DECOMMISSIONING 
makes it more likely to happen.
[~kasha], can you hold off on the commit, given that we are not 100% sure this fix 
is sufficient and free of side effects? I will do more investigation and review today.

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch, YARN-5566.004.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have been decommissioned at 6:00am.






[jira] [Assigned] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-08-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-5609:
-

Assignee: Arun Suresh

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-4876 allows an AM to explicitly *initialize*, *start*, *stop* and 
> *destroy* a {{Container}}.
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> # *upgrade* : which is a composition of *stop* + *(re)initialize* + *start*
> # *restart* : which is *stop* + *start*






[jira] [Created] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-08-31 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5609:
-

 Summary: Expose upgrade and restart API in 
ContainerManagementProtocol
 Key: YARN-5609
 URL: https://issues.apache.org/jira/browse/YARN-5609
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh


YARN-4876 allows an AM to explicitly *initialize*, *start*, *stop* and 
*destroy* a {{Container}}.

This JIRA proposes to extend the ContainerManagementProtocol with the following 
API:
# *upgrade* : which is a composition of *stop* + *(re)initialize* + *start*
# *restart* : which is *stop* + *start*
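
For illustration, a minimal sketch of the shape such an extension could take (all 
type and method names below are assumptions, not the final API):

{code}
// Hedged sketch only: the request/response classes here are placeholders,
// not the committed ContainerManagementProtocol signatures.
public interface ContainerLifecycleExtensions {

  // upgrade = stop + (re)initialize with a new ContainerLaunchContext + start
  UpgradeContainersResponse upgradeContainers(UpgradeContainersRequest request)
      throws YarnException, IOException;

  // restart = stop + start, reusing the existing allocation
  RestartContainersResponse restartContainers(RestartContainersRequest request)
      throws YarnException, IOException;
}
{code}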







[jira] [Created] (YARN-5608) TestAMRMClient.setup() fails with ArrayOutOfBoundsException

2016-08-31 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5608:
--

 Summary: TestAMRMClient.setup() fails with 
ArrayOutOfBoundsException
 Key: YARN-5608
 URL: https://issues.apache.org/jira/browse/YARN-5608
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Daniel Templeton


After 39 runs of the {{TestAMRMClient}} test, I encountered:

{noformat}
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at 
org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.setup(TestAMRMClient.java:144)
{noformat}

I see it shows up occasionally in the error emails as well.
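
A hedged sketch of the kind of guard that would surface the real problem instead 
of the bare exception (the variable names in setup() are assumptions):

{code}
// Sketch only: verify the expected node count before indexing into the list,
// so a failure explains what went wrong instead of throwing
// IndexOutOfBoundsException from nodeReports.get(0).
// Assumes the test's yarnClient and nodeCount fields are in scope.
List<NodeReport> nodeReports = yarnClient.getNodeReports(NodeState.RUNNING);
Assert.assertEquals("Not all NMs registered before setup() continued",
    nodeCount, nodeReports.size());
String rackName = nodeReports.get(0).getRackName();
{code}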






[jira] [Updated] (YARN-4875) Changes in NodeStatusUpdater and ResourceTrackerService to accommodate Allocations

2016-08-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4875:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> Changes in NodeStatusUpdater and ResourceTrackerService to accommodate 
> Allocations
> --
>
> Key: YARN-4875
> URL: https://issues.apache.org/jira/browse/YARN-4875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>







[jira] [Updated] (YARN-4876) Decoupled Init / Destroy of Containers from Start / Stop

2016-08-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4876:
--
Summary: Decoupled Init / Destroy of Containers from Start / Stop  (was: 
[Phase 1] Decoupled Init / Destroy of Containers from Start / Stop)

> Decoupled Init / Destroy of Containers from Start / Stop
> 
>
> Key: YARN-4876
> URL: https://issues.apache.org/jira/browse/YARN-4876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Marco Rabozzi
> Attachments: YARN-4876-design-doc.pdf, YARN-4876.002.patch, 
> YARN-4876.003.patch, YARN-4876.004.patch, YARN-4876.01.patch
>
>
> Introduce *initialize* and *destroy* container API into the 
> *ContainerManagementProtocol* and decouple the actual start of a container 
> from the initialization. This will allow AMs to re-start a container without 
> having to lose the allocation.
> Additionally, if the localization of the container is associated with the 
> initialize (and the cleanup with the destroy), this can also be used by 
> applications to upgrade a Container by *re-initializing* with a new 
> *ContainerLaunchContext*.
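
As an illustration of the decoupled lifecycle from an AM's point of view (a sketch 
under assumed method names, not the committed API):

{code}
// Hedged sketch: method names are assumptions based on the description above.
cm.initializeContainer(containerId, launchContext);       // localization tied to init
cm.startContainer(containerId);                           // actual process launch
cm.stopContainer(containerId);                            // stop, keep the allocation
cm.startContainer(containerId);                           // re-start, no re-allocation
cm.reinitializeContainer(containerId, newLaunchContext);  // upgrade path
cm.destroyContainer(containerId);                         // cleanup and release
{code}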






[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453690#comment-15453690
 ] 

Hadoop QA commented on YARN-5264:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 346 unchanged - 4 fixed = 352 total (was 350) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 34s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826512/YARN-5264.007.patch |
| JIRA Issue | YARN-5264 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f6236f24be8d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 85bab5f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12977/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12977/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12977/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: 

[jira] [Updated] (YARN-4874) Changes in the AMRMClient to generate and track containers

2016-08-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4874:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> Changes in the AMRMClient to generate and track containers
> --
>
> Key: YARN-4874
> URL: https://issues.apache.org/jira/browse/YARN-4874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>







[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-08-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453640#comment-15453640
 ] 

Karthik Kambatla commented on YARN-4997:


Thanks for working on this, [~cassanada]. 

A few (mostly minor) comments on the latest patch:
# YarnAuthorizationProvider#destroy should be marked @VisibleForTesting.
# Noticed there is QueueACL in the mapreduce code as well that can be dropped 
altogether, e.g. mapred QueueManager and many parts (all of?) QueueACL. Can we 
file a follow-up JIRA to drop all of that? 
# AllocationFileLoaderService:
## getDefaultPermissions: no need to specify the type when creating an ArrayList 
for defaultPermissions.
## Listener is an interface; no need to specify the public visibility modifier. 
# FairScheduler
## onReload: Is there a need to lock the scheduler when setting permissions? 
Would it be okay to limit the synchronized block to whatever was synchronized 
before? 
## Similarly, is there a reason to synchronize setQueueAcls?
## In setQueueAcls, we seem to initially set the default permissions and then 
"overwrite" them with the final permissions. Is the first step necessary? I quickly 
looked at the implementation of ConfiguredAuthorizationProvider; setPermission's 
semantics appear to be somewhere between append and overwrite. If it is append, 
maybe we should change the name to addPermission? 


> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch, 
> YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, 
> YARN-4997-006.patch, YARN-4997-007.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.






[jira] [Commented] (YARN-3854) Add localization support for docker images

2016-08-31 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453596#comment-15453596
 ] 

Daniel Templeton commented on YARN-3854:


Thanks, [~tangzhankun].  I just took a look through the proposal, and it looks 
good.  In the implementation section, you mention that your solution for 
dealing with credentials is to assume that proper credentials exist.  Are you 
planning to validate or enforce that assumption?  If not, the implementation 
will have to deal with what happens when the credentials don't exist or aren't 
correct.

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here with different trade-offs/issues: image archives on HDFS + docker load, 
> docker pull during the localization phase, or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 






[jira] [Updated] (YARN-5565) Capacity Scheduler not assigning value correctly.

2016-08-31 Thread gurmukh singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gurmukh singh updated YARN-5565:

Environment: hadoop 2.7.2

> Capacity Scheduler not assigning value correctly.
> -
>
> Key: YARN-5565
> URL: https://issues.apache.org/jira/browse/YARN-5565
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
> Environment: hadoop 2.7.2
>Reporter: gurmukh singh
>  Labels: capacity-scheduler, scheduler
>
> Hi
> I was testing and found that the value assigned in the scheduler 
> configuration is not consistent with what the ResourceManager is assigning.
> I set the configuration as below; I understand that it is a Java float, but 
> the rounding is not correct.
> capacity-scheduler.xml
> <property>
>   <name>yarn.scheduler.capacity.q1.capacity</name>
>   <value>7.142857142857143</value>
> </property>
> In Java:  System.err.println(7.142857142857143f) ===> 7.142857
> But the ResourceManager is instead assigning 7.1428566.
> Tested this on hadoop 2.7.2.
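
A self-contained demo of the effect (my own sketch; the percent-to-fraction round 
trip is an assumption about where the scheduler re-rounds):

{code}
public class CapacityFloatDemo {
  public static void main(String[] args) {
    // The decimal 7.142857142857143 is not exactly representable as a float;
    // parsing rounds it to the nearest representable value.
    float parsed = Float.parseFloat("7.142857142857143");
    System.err.println(parsed);           // prints 7.142857

    // Every further float operation re-rounds. A percent -> fraction -> percent
    // round trip (an assumed code path) lands one ulp lower, matching the
    // reported 7.1428566.
    float fraction = parsed / 100f;
    System.err.println(fraction * 100f);  // prints 7.1428566
  }
}
{code}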






[jira] [Updated] (YARN-5565) Capacity Scheduler not assigning value correctly.

2016-08-31 Thread gurmukh singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gurmukh singh updated YARN-5565:

Labels: capacity-scheduler scheduler  (was: capacity-scheduler)

> Capacity Scheduler not assigning value correctly.
> -
>
> Key: YARN-5565
> URL: https://issues.apache.org/jira/browse/YARN-5565
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
>Reporter: gurmukh singh
>  Labels: capacity-scheduler, scheduler
>
> Hi
> I was testing and found that the value assigned in the scheduler 
> configuration is not consistent with what the ResourceManager is assigning.
> I set the configuration as below; I understand that it is a Java float, but 
> the rounding is not correct.
> capacity-scheduler.xml
> <property>
>   <name>yarn.scheduler.capacity.q1.capacity</name>
>   <value>7.142857142857143</value>
> </property>
> In Java:  System.err.println(7.142857142857143f) ===> 7.142857
> But the ResourceManager is instead assigning 7.1428566.
> Tested this on hadoop 2.7.2.






[jira] [Updated] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-31 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5549:
---
Attachment: YARN-5549.005.patch

OK.  Here's a patch based on the previous patch 3 with [~kasha]'s feedback 
applied.

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch, YARN-5549.002.patch, 
> YARN-5549.003.patch, YARN-5549.004.patch, YARN-5549.005.patch
>
>
> The command could contain sensitive information, such as keystore passwords, 
> AWS credentials, or other secrets.  Instead of logging it at INFO, we should 
> log it at DEBUG and include a property to disable logging it altogether.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.






[jira] [Updated] (YARN-5565) Capacity Scheduler not assigning value correctly.

2016-08-31 Thread gurmukh singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gurmukh singh updated YARN-5565:

Labels: capacity-scheduler  (was: )

> Capacity Scheduler not assigning value correctly.
> -
>
> Key: YARN-5565
> URL: https://issues.apache.org/jira/browse/YARN-5565
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
> Environment: Centos 6.7
>Reporter: gurmukh singh
>  Labels: capacity-scheduler
>
> Hi
> I was testing and found that the value assigned in the scheduler 
> configuration is not consistent with what the ResourceManager is assigning.
> I set the configuration as below; I understand that it is a Java float, but 
> the rounding is not correct.
> capacity-scheduler.xml
> <property>
>   <name>yarn.scheduler.capacity.q1.capacity</name>
>   <value>7.142857142857143</value>
> </property>
> In Java:  System.err.println(7.142857142857143f) ===> 7.142857
> But the ResourceManager is instead assigning 7.1428566.
> Tested this on hadoop 2.7.2.






[jira] [Updated] (YARN-5565) Capacity Scheduler not assigning value correctly.

2016-08-31 Thread gurmukh singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gurmukh singh updated YARN-5565:

Environment: (was: Centos 6.7)

> Capacity Scheduler not assigning value correctly.
> -
>
> Key: YARN-5565
> URL: https://issues.apache.org/jira/browse/YARN-5565
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
>Reporter: gurmukh singh
>  Labels: capacity-scheduler
>
> Hi
> I was testing and found that the value assigned in the scheduler 
> configuration is not consistent with what the ResourceManager is assigning.
> I set the configuration as below; I understand that it is a Java float, but 
> the rounding is not correct.
> capacity-scheduler.xml
> <property>
>   <name>yarn.scheduler.capacity.q1.capacity</name>
>   <value>7.142857142857143</value>
> </property>
> In Java:  System.err.println(7.142857142857143f) ===> 7.142857
> But the ResourceManager is instead assigning 7.1428566.
> Tested this on hadoop 2.7.2.






[jira] [Commented] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453549#comment-15453549
 ] 

Karthik Kambatla commented on YARN-5549:


I hate introducing one more config, but it looks like it is required here. If the 
admin turns on debug logging to debug problems, users' credentials are exposed. 
As long as we are going to drop the config along with the log line in one of 
these follow-up JIRAs, I am fine with including the config for now. However, 
for security reasons, the default should probably be OFF. It would also help to 
mention the config name alongside the REDACTED placeholder.
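
Something along these lines, perhaps (a sketch only; the property name here is 
hypothetical, not the one in the patch):

{code}
// Hedged sketch: gate the launch command behind an opt-in property that
// defaults to off, and log a redaction marker otherwise. Assumes the
// surrounding AMLauncher fields (conf, LOG, command) are in scope;
// "yarn.resourcemanager.am.log-launch-command" is a made-up key.
if (conf.getBoolean("yarn.resourcemanager.am.log-launch-command", false)) {
  LOG.debug("Command to launch container for ApplicationMaster: " + command);
} else {
  LOG.info("Command to launch container for ApplicationMaster: <REDACTED>");
}
{code}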

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch, YARN-5549.002.patch, 
> YARN-5549.003.patch, YARN-5549.004.patch
>
>
> The command could contain sensitive information, such as keystore passwords, 
> AWS credentials, or other secrets.  Instead of logging it at INFO, we should 
> log it at DEBUG and include a property to disable logging it altogether.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.






[jira] [Updated] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5264:
---
Attachment: YARN-5264.007.patch

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch, 
> YARN-5264.006.patch, YARN-5264.007.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 






[jira] [Updated] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5264:
---
Attachment: YARN-5264.007.patch

Thanks [~kasha] for the review. I've uploaded patch 007 addressing your comment. 

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch, 
> YARN-5264.006.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 






[jira] [Updated] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5264:
---
Attachment: (was: YARN-5264.007.patch)

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch, 
> YARN-5264.006.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 






[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453518#comment-15453518
 ] 

Karthik Kambatla commented on YARN-5264:


Nice! The code is so much cleaner. Thanks Yufei for working on this, and Daniel 
for the careful reviews. 

Just one nit: getMaxAMShare is not used anywhere. Drop it? 

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch, 
> YARN-5264.006.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 






[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453503#comment-15453503
 ] 

Hadoop QA commented on YARN-5264:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 346 unchanged - 4 fixed = 352 total (was 350) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 26s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 25s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826497/YARN-5264.006.patch |
| JIRA Issue | YARN-5264 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 184c43159eae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01721dd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12975/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12975/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12975/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: 

[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-31 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453483#comment-15453483
 ] 

Carlo Curino commented on YARN-5323:


[~giovanni.fumarola] and [~subru] thanks for your comments. 

I addressed all of them except [~subru]'s FederationStateStoreFacade one. 
The rationale for leaving the API as is, is that it makes it evident to the 
implementor that the set of active sub-clusters could change over time, and 
should be handled with care. 
This way the code is (not completely, but close to) functional, which I like 
for the policies.
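
In other words, something shaped roughly like this (a sketch of the idea, not the 
actual interfaces in the patch):

{code}
// Hedged sketch: passing the current view of active sub-clusters into every
// invocation keeps the policy close to functional -- no hidden mutable state
// that could go stale when sub-clusters join or leave.
public interface RouterPolicySketch {
  SubClusterId selectHomeSubCluster(
      ApplicationSubmissionContext appContext,
      Map<SubClusterId, SubClusterInfo> activeSubClusters) throws YarnException;
}
{code}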

 



> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323.01.patch, YARN-5323.02.patch, 
> YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward the job submission/query requests as 
> well as ResourceRequests.






[jira] [Updated] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-08-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4876:
--
Attachment: YARN-4876.004.patch

Rebasing and updating again.

> [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop
> --
>
> Key: YARN-4876
> URL: https://issues.apache.org/jira/browse/YARN-4876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Marco Rabozzi
> Attachments: YARN-4876-design-doc.pdf, YARN-4876.002.patch, 
> YARN-4876.003.patch, YARN-4876.004.patch, YARN-4876.01.patch
>
>
> Introduce *initialize* and *destroy* container API into the 
> *ContainerManagementProtocol* and decouple the actual start of a container 
> from the initialization. This will allow AMs to re-start a container without 
> having to lose the allocation.
> Additionally, if the localization of the container is associated with the 
> initialize (and the cleanup with the destroy), this can also be used by 
> applications to upgrade a Container by *re-initializing* with a new 
> *ContainerLaunchContext*.






[jira] [Updated] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-08-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4876:
--
Attachment: (was: YARN-4876.004.patch)

> [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop
> --
>
> Key: YARN-4876
> URL: https://issues.apache.org/jira/browse/YARN-4876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Marco Rabozzi
> Attachments: YARN-4876-design-doc.pdf, YARN-4876.002.patch, 
> YARN-4876.003.patch, YARN-4876.01.patch
>
>
> Introduce *initialize* and *destroy* container API into the 
> *ContainerManagementProtocol* and decouple the actual start of a container 
> from the initialization. This will allow AMs to re-start a container without 
> having to lose the allocation.
> Additionally, if the localization of the container is associated with the 
> initialize (and the cleanup with the destroy), this can also be used by 
> applications to upgrade a Container by *re-initializing* with a new 
> *ContainerLaunchContext*.






[jira] [Commented] (YARN-679) add an entry point that can start any Yarn service

2016-08-31 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453458#comment-15453458
 ] 

Daniel Templeton commented on YARN-679:
---

Continuing:

* The hyphen should be "that": {code} * Handler of interrupts -relays them to a 
registered{code}
* This: {code}   * Run a service -called after {@link Service#start()}.{code} 
should be {code}+   * Run a service. This method is called after {@link 
Service#start()}.{code}
* The period should be a colon here: {code}   *   Any other exception. A 
new {@link ServiceLaunchException} is created{code}
* This hyphen should be a dash or a comma: {code} * the GenericOptionsParser 
-simply extracted to constants.{code}
* Same here: {code}   * except in the case that the class does load but it 
isn't actually{code}
* The {{@value}} tags in the {{LauncherArguments.ARG_CONFCLASS}} and 
{{LauncherArguments.E_PARSE_FAILED}} javadocs are just kinda dangling out 
there, not really adding anything--except maybe confusion.
* The javadoc here: {code}LauncherExitCodes.EXIT_FAIL{code} doesn't match the 
pattern for the rest of the class' constants.
* The 40x/50x comments in the {{LauncherExitCodes}} class' constants' javadoc 
need a little context around them.  Otherwise it just seems like technical 
Tourettes.
* The hyphen here should be a dash: {code} *   If any exception is raised 
and provides an exit code
 *   -that is, it implements {@link ExitCodeProvider},{code}
* "configurations" should be possessive, not plural: {code} * is wrong and 
logger configurations not on it, then no error messages by{code}
* In {{ServiceLauncher.launchServiceAndExit()}}, this bit {code}for (String 
arg : args) {
  builder.append('"').append(arg).append("\" ");
}{code} could be pulled out into another method and run lazily, since it's 
only needed in exceptional cases.  You could also reuse it in 
{{parseCommandArgs()}} (see the sketch after this list).
* The first sentence of a javadoc header should summarize the content.  This 
one doesn't: {code}  /**
   * An exception has been raised.
   * Save it to {@link #serviceException}, with the exit code in
   * {@link #serviceExitCode}
   * @param exitException exception
   */{code}
* Remove the period: {code}   * @return the new options.{code}
* The hyphen here should be a dash or "because": {code}  + "- it is 
not a Configuration class/subclass");{code}
* The hyphen here should be a period or semicolon: {code}
LOG.debug("Failed to load {} -it is not on the classpath", classname);{code}
* "LaunchedService" should be "LaunchableService": {code}  // it's a 
launchedService, pass in the conf and arguments before init)
  LOG.debug("Service {} implements LaunchedService", name);
  launchableService = (LaunchableService) service;
  if (launchableService.isInState(Service.STATE.INITED)) {
LOG.warn("LaunchedService {}" 
{code}
* Won't this lead to confusing exceptions with stack traces as their messages 
followed by their own stack traces? {code}  if (message == null) {
// some exceptions do not have a message; fall back
// to the string value.
message = thrown.toString();
  }{code}
* The javadoc for {{ServiceLauncher.registerFailureHandling()}} omits that it 
also registers a handler for SIGTERM.
* The "Override point:" note seems to be missing from many of the 
{{ServiceLauncher}} methods' javadoc.
* This should be in the check to see if logging is enabled: {code}
LOG.debug("Command line: {}", argString);{code}


I have to be honest; my eyes glazed over by the end, and I'm sure the quality 
of my review suffered.  The only thing that kept me going was the comforting 
thought that you'll have as much fun sorting through my litany of comments as I 
did digging through all that code.

Would it be possible to break this down into a few smaller patches?  It would 
help tremendously in getting it reviewed.

> add an entry point that can start any Yarn service
> --
>
> Key: YARN-679
> URL: https://issues.apache.org/jira/browse/YARN-679
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-679-001.patch, YARN-679-002.patch, 
> YARN-679-002.patch, YARN-679-003.patch, YARN-679-004.patch, 
> YARN-679-005.patch, YARN-679-006.patch, YARN-679-007.patch, 
> YARN-679-008.patch, YARN-679-009.patch, YARN-679-010.patch, 
> YARN-679-011.patch, org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf
>
>  Time Spent: 72h
>  Remaining Estimate: 0h
>
> There's no need to write separate .main classes for every Yarn service, given 
> that the startup mechanism should be identical: create, init, start, wait for 
> stopped -with an interrupt handler to trigger a 

[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-08-31 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453381#comment-15453381
 ] 

Subru Krishnan commented on YARN-5323:
--

Thanks [~curino] for working on this. The latest patch mostly LGTM; I have a few 
minor comments:
  * I feel it would be better to have {{FederationStateStoreFacade}} in 
{{FederationPolicyInitializationContext}} and use that instead of passing the 
active sub-cluster map in every invocation of both 
_Router/AMRMProxyFederationPolicy_ (sketched below).
  * There are a few public methods missing Javadocs, like the getters/setters in 
{{FederationPolicyInitializationContext}}.
  * IMO a few of the open Yetus checkstyle/javadoc warnings are fixable.
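
A rough sketch of the first suggestion; the getter/setter names are assumptions for illustration, not the committed code.

{code}
// Hypothetical shape: the context carries the facade, so policies can query
// active sub-clusters themselves instead of receiving a map per invocation.
public class FederationPolicyInitializationContext {
  private FederationStateStoreFacade stateStoreFacade;

  public FederationStateStoreFacade getFederationStateStoreFacade() {
    return stateStoreFacade;
  }

  public void setFederationStateStoreFacade(
      FederationStateStoreFacade facade) {
    this.stateStoreFacade = facade;
  }
}
{code}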

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323.01.patch, YARN-5323.02.patch, 
> YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to fwd the jobs submission/query requests as 
> well as ResourceRequests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5563) Add log messages for jobs in ACCEPTED state but not runnable.

2016-08-31 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5563:
---
Description: 
Leaf queues maintain a list of runnable and non-runnable apps. FairScheduler 
marks an app non-runnable for different reasons: exceeding the following 
properties of the leaf queue:
(1) queue max apps, 
(2) user max apps, 
(3) queue maxResources, 
(4) maxAMShare. 

It would be nice to log the reason an app isn't runnable. The first three are 
easy to infer, but the last one (maxAMShare) is particularly hard. We are going 
to log all of them and show the reason if any in WebUI application view.


  was:
Leaf queues maintain a list of runnable and non-runnable apps. FairScheduler 
marks an app non-runnable for different reasons: exceeding (1) queue max apps, 
(2) user max apps, (3) queue maxResources, (4) maxAMShare. It would be nice to 
log the reason an app isn't runnable. The first three are easy to infer, but 
the last one (maxAMShare) is particularly hard. It would be nice to log at 
least that.



> Add log messages for jobs in ACCEPTED state but not runnable.
> -
>
> Key: YARN-5563
> URL: https://issues.apache.org/jira/browse/YARN-5563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: supportability
>
> Leaf queues maintain a list of runnable and non-runnable apps. FairScheduler 
> marks an app non-runnable for different reasons: exceeding the following 
> properties of the leaf queue:
> (1) queue max apps, 
> (2) user max apps, 
> (3) queue maxResources, 
> (4) maxAMShare. 
> It would be nice to log the reason an app isn't runnable. The first three are 
> easy to infer, but the last one (maxAMShare) is particularly hard. We are 
> going to log all of them and show the reason, if any, in the WebUI application view.
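
A minimal illustration of the kind of diagnostic the description asks for; every name below is hypothetical, not the eventual patch.

{code}
// Hypothetical sketch: surface why an app stays non-runnable instead of
// leaving it silently in the ACCEPTED state.
private void logNonRunnableReason(String queueName, int runnableApps,
    int queueMaxApps, float amShareUsed, float maxAMShare) {
  if (runnableApps >= queueMaxApps) {
    LOG.info("App not runnable: queue " + queueName
        + " reached its max apps limit (" + queueMaxApps + ")");
  } else if (amShareUsed > maxAMShare) {
    LOG.info("App not runnable: queue " + queueName
        + " would exceed maxAMShare (" + maxAMShare + ")");
  }
}
{code}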



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453353#comment-15453353
 ] 

Yufei Gu commented on YARN-5264:


The patch 006 fixed the last comment. Thanks a lot for the detailed review, 
[~templedf]!

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch, 
> YARN-5264.006.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5605) Preempt containers (all on one node) to meet the requirement of starved applications

2016-08-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453350#comment-15453350
 ] 

ASF GitHub Bot commented on YARN-5605:
--

GitHub user kambatla opened a pull request:

https://github.com/apache/hadoop/pull/124

YARN-5605. Preempt containers (all on one node) to meet the requirement of 
starved applications.

- Remove existing preemption code.
- Deleted preemption tests, to be reinstated later.
- Identify starved applications in the update thread when preemption is 
enabled.
- For each starved application, identify containers to be preempted on a 
single node to make room for this request.
- Preempt these containers after necessary warning.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kambatla/hadoop yarn-5065

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/124.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #124


commit 63804f783cc398479be6f9fe69e9db8ea9279047
Author: Karthik Kambatla 
Date:   2016-07-25T00:21:28Z

Remove existing preemption code.
Deleted preemption tests, to be reinstated later.
Identify starved applications in the update thread when preemption is 
enabled.
For each starved application, identify containers to be preempted on a 
single node to make room for this request.
Preempt these containers after necessary warning.




> Preempt containers (all on one node) to meet the requirement of starved 
> applications
> 
>
> Key: YARN-5605
> URL: https://issues.apache.org/jira/browse/YARN-5605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5605-1.patch
>
>
> Required items:
> # Identify starved applications
> # Identify a node that has enough containers from applications over their 
> fairshare.
> # Preempt those containers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453346#comment-15453346
 ] 

Daniel Templeton commented on YARN-5264:


Looks good to me.  +1 (non-binding)

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch, 
> YARN-5264.006.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5264:
---
Attachment: YARN-5264.006.patch

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch, 
> YARN-5264.006.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453314#comment-15453314
 ] 

Li Lu commented on YARN-5585:
-

If there are two flow runs running, I believe the problem is how to define the 
meaning of "fromId". This appears to be something that requires working with 
"aggregated" data on one flow, instead of directly working on data in 
hierarchical order. IIUC the ultimate goal of this JIRA is to support 
pagination, so I think it would be helpful to fully understand the important 
use cases here. 

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But retrieving the next 5 apps is 
> difficult.
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453305#comment-15453305
 ] 

Hadoop QA commented on YARN-5545:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 123 unchanged - 2 fixed = 129 total (was 125) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 39s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s 
{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826489/YARN-5545.0001.patch |
| JIRA Issue | YARN-5545 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3628e2179fc0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01721dd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12973/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12973/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12973/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12973/testReport/ |
| asflicense | 

[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453283#comment-15453283
 ] 

Daniel Templeton commented on YARN-5264:


Darn it, I found something I had missed before.  In {{createNewQueues()}}, you 
have:

{code}
if (!i.hasNext() && (queueType != FSQueueType.PARENT)) {
  FSLeafQueue leafQueue = new FSLeafQueue(queueName, scheduler, parent);
  leafQueue.init();
  leafQueues.add(leafQueue);
  queue = leafQueue;
} else {
  newParent = new FSParentQueue(queueName, scheduler, parent);
  newParent.init();
  queue = newParent;
}
{code}

The {{init()}} calls should be pulled out of the _if_ and become 
{{queue.init()}}, e.g.

{code}
if (!i.hasNext() && (queueType != FSQueueType.PARENT)) {
  FSLeafQueue leafQueue = new FSLeafQueue(queueName, scheduler, parent);
  leafQueues.add(leafQueue);
  queue = leafQueue;
} else {
  queue = new FSParentQueue(queueName, scheduler, parent);
}

queue.init();
{code}


> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453274#comment-15453274
 ] 

Yufei Gu commented on YARN-5264:


The checkstyle issues are OK to leave alone.

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch, YARN-5264.004.patch, YARN-5264.005.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5505) Create an agent-less docker provider in the native-services framework

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453249#comment-15453249
 ] 

Hadoop QA commented on YARN-5505:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
24s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 314 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s {color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core
 generated 2 new + 34 unchanged - 2 fixed = 36 total (was 36) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core:
 The patch generated 47 new + 1086 unchanged - 94 fixed = 1133 total (was 1180) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 generated 7 new + 310 unchanged - 4 fixed = 317 total (was 314) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 27s 
{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 |
|  |  Load of known null value in 
org.apache.slider.providers.ProviderUtils.localizeConfigFiles(ContainerLauncher,
 String, String, ConfTreeOperations, Map, MapOperations, SliderFileSystem, 
String)  At ProviderUtils.java:in 
org.apache.slider.providers.ProviderUtils.localizeConfigFiles(ContainerLauncher,
 String, String, ConfTreeOperations, Map, MapOperations, SliderFileSystem, 
String)  At ProviderUtils.java:[line 608] |
|  |  Possible null pointer dereference of configValue in 
org.apache.slider.providers.ProviderUtils.dereferenceAllConfigs(Map)  
Dereferenced at ProviderUtils.java:configValue in 

[jira] [Commented] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453243#comment-15453243
 ] 

Wangda Tan commented on YARN-5598:
--

Thanks [~sunilg], 

Actually, what we need to check is that the artifacts produced by create-release 
are good. They will be placed under artifacts/. We don't have to deploy them in 
the Docker environment; I just copied the binary artifacts and deployed them in 
my OS X environment.

> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-08-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453039#comment-15453039
 ] 

Yufei Gu edited comment on YARN-5554 at 8/31/16 8:13 PM:
-

Thanks [~wilfreds] for working on this. The patch looks good to me generally. 
My one thought is that {{moveApplicationAcrossQueues}} doesn't check if the 
queue exists before ACL checking, and neither do its callers. It's OK since its 
callee, {{queueACLsManager.checkAccess}}, does the NULL check of the target 
queue, but the LOG information seems vague. What about doing the NULL check in 
{{moveApplicationAcrossQueues}}, providing the explicit message "target queue 
doesn't exist", and removing the NULL check added in this patch in its callee 
{{queueACLsManager.checkAccess}}?


was (Author: yufeigu):
Thanks [~wilfreds] for working on this. The patch looks good to me generally. 
My one thought is that {{moveApplicationAcrossQueues}} doesn't check if the 
queue exists first, and neither do its callers. It's OK since its 
callee, {{queueACLsManager.checkAccess}}, does the NULL check of the target 
queue, but the LOG information seems vague. What about doing the NULL check in 
{{moveApplicationAcrossQueues}}, providing the explicit message "target queue 
doesn't exist", and removing the NULL check added in this patch in its callee 
{{queueACLsManager.checkAccess}}?

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to
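
A sketch of the explicit check suggested in the comment above; every name here is an assumption for illustration, not the actual patch.

{code}
// Hypothetical: fail fast with a clear message before any ACL check,
// instead of relying on a NULL check deep inside checkAccess().
private void validateTargetQueue(String targetQueue) throws YarnException {
  if (getQueue(targetQueue) == null) {  // getQueue(): assumed lookup helper
    throw new YarnException(
        "Move rejected: target queue " + targetQueue + " does not exist");
  }
}
{code}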



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-08-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453039#comment-15453039
 ] 

Yufei Gu edited comment on YARN-5554 at 8/31/16 8:12 PM:
-

Thanks [~wilfreds] for working on this. The patch looks good to me generally. 
My one concern is that {{moveApplicationAcrossQueues}} doesn't check if the 
queue exists first, and neither do its callers. It's OK since its callee, 
{{queueACLsManager.checkAccess}}, does the NULL check of the target queue, but 
the LOG information seems vague. What about doing the NULL check in 
{{moveApplicationAcrossQueues}}, providing the explicit message "target queue 
doesn't exist", and removing the NULL check added in this patch in its callee 
{{queueACLsManager.checkAccess}}?


was (Author: yufeigu):
Thanks [~wilfreds] for working on this. The patch looks good to me generally. 
My one concern is that {{moveApplicationAcrossQueues}} doesn't check if the 
queue exists first, and neither do its callers. I can see it's OK since its 
callee, {{queueACLsManager.checkAccess}}, does the NULL check of the target 
queue, but the LOG information seems vague. What about doing the NULL check in 
{{moveApplicationAcrossQueues}}, providing the explicit message "target queue 
doesn't exist", and removing the NULL check added in this patch in its callee 
{{queueACLsManager.checkAccess}}?

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-08-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453039#comment-15453039
 ] 

Yufei Gu edited comment on YARN-5554 at 8/31/16 8:13 PM:
-

Thanks [~wilfreds] for working on this. The patch looks good to me generally. 
My one thought is that {{moveApplicationAcrossQueues}} doesn't check if the 
queue exists first, and neither do its callers. It's OK since its callee, 
{{queueACLsManager.checkAccess}}, does the NULL check of the target queue, but 
the LOG information seems vague. What about doing the NULL check in 
{{moveApplicationAcrossQueues}}, providing the explicit message "target queue 
doesn't exist", and removing the NULL check added in this patch in its callee 
{{queueACLsManager.checkAccess}}?


was (Author: yufeigu):
Thanks [~wilfreds] for working on this. The patch looks good to me generally. 
My one concern is that {{moveApplicationAcrossQueues}} doesn't check if the 
queue exists first, and neither do its callers. It's OK since its callee, 
{{queueACLsManager.checkAccess}}, does the NULL check of the target queue, but 
the LOG information seems vague. What about doing the NULL check in 
{{moveApplicationAcrossQueues}}, providing the explicit message "target queue 
doesn't exist", and removing the NULL check added in this patch in its callee 
{{queueACLsManager.checkAccess}}?

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5505) Create an agent-less docker provider in the native-services framework

2016-08-31 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5505:
-
Attachment: YARN-5505-yarn-native-services.002.patch

[~jianhe], thank you for the comments. I have attached a new patch to address 
them. Let me know if you have any other suggestions.

> Create an agent-less docker provider in the native-services framework
> -
>
> Key: YARN-5505
> URL: https://issues.apache.org/jira/browse/YARN-5505
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-5505-yarn-native-services.001.patch, 
> YARN-5505-yarn-native-services.002.patch
>
>
> The Slider AM has a pluggable portion called a provider. Currently the only 
> provider implementation is the agent provider which contains the bulk of the 
> agent-related Java code. We can implement a docker provider that does not use 
> the agent and gets information it needs directly from the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-08-31 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453163#comment-15453163
 ] 

Bibin A Chundatt edited comment on YARN-5545 at 8/31/16 7:45 PM:
-

[~sunilg]/ [~Naganarasimha Garla]
# The solution based on resource usage has an issue during startup when none of 
the node managers are registered: resources will be zero and applications can 
get rejected.
# Attaching a patch based on {{yarn.scheduler.capacity.maximum-applications}} to 
be run on a partition for all queues. For each label we can configure 
{{yarn.scheduler.capacity.maximum-applications.accessible-node-labels.<label>}}.
# During the application limit check, applications running on the complete 
cluster will be considered for a leaf queue (across all partitions).
# When the property is not configured, the default value of 
{{yarn.scheduler.capacity.maximum-applications}} is considered for partition 
queues.

If max-applications for a queue is configured, then 
{{yarn.scheduler.capacity.maximum-applications}} will not be considered.

Attaching the first patch for the same.


was (Author: bibinchundatt):
[~sunilg]/ [~Naganarasimha Garla]
# The solution based on resource usage has an issue during startup when none of 
the node managers are registered: resources will be zero and applications can 
get rejected.
# Attaching a patch based on max-applications to be run on a partition for all 
queues. For each label we can configure 
{{yarn.scheduler.capacity.maximum-applications.accessible-node-labels.<label>}}.
# During the application limit check, applications running on the complete 
cluster will be considered for a leaf queue (across all partitions).
# When the property is not configured, the default value of 
{{yarn.scheduler.capacity.maximum-applications}} is considered.

Attaching the first patch for the same.

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[jira] [Updated] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-08-31 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5545:
---
Attachment: YARN-5545.0001.patch

[~sunilg]/ [~Naganarasimha Garla]
# The solution based on resource usage has an issue during startup when none of 
the node managers are registered: resources will be zero and applications can 
get rejected.
# Attaching a patch based on max-applications to be run on a partition for all 
queues. For each label we can configure 
{{yarn.scheduler.capacity.maximum-applications.accessible-node-labels.<label>}}.
# During the application limit check, applications running on the complete 
cluster will be considered for a leaf queue (across all partitions).
# When the property is not configured, the default value of 
{{yarn.scheduler.capacity.maximum-applications}} is considered.

Attaching the first patch for the same.
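
A small sketch of how the proposed per-label limit might be set; the per-label property name follows the proposal above and is not an existing CapacityScheduler key.

{code}
// Hypothetical configuration per this proposal; the per-label key below is
// the proposed property, not an existing one.
Configuration conf = new Configuration();
// Default, used when no per-label value is configured.
conf.setInt("yarn.scheduler.capacity.maximum-applications", 10000);
// Proposed per-partition override for label "labelx".
conf.setInt(
    "yarn.scheduler.capacity.maximum-applications"
        + ".accessible-node-labels.labelx", 5000);
{code}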

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   

[jira] [Issue Comment Deleted] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5585:
---
Comment: was deleted

(was: In fact, in the case of flows within an app, there can be a problem with 
the approach above if we have 2 or more flow runs executing simultaneously.)

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But retrieving the next 5 apps is 
> difficult.
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453145#comment-15453145
 ] 

Varun Saxena commented on YARN-5585:


In fact, in the case of flows within an app, there can be a problem with the 
approach above if we have 2 or more flow runs executing simultaneously.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But retrieving the next 5 apps is 
> difficult.
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453147#comment-15453147
 ] 

Varun Saxena commented on YARN-5585:


In fact, in the case of flows within an app, there can be a problem with the 
approach above if we have 2 or more flow runs executing simultaneously.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications are stored in a database as app-1, app-2 ... app-10,
> *getApps?limit=5* gives app-1 to app-5. But retrieving the next 5 apps is 
> difficult.
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5577) [Atsv2] Document object passing in infofilters with an example

2016-08-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453118#comment-15453118
 ] 

Varun Saxena commented on YARN-5577:


Sorry I missed this one. Will commit it tomorrow.

> [Atsv2] Document object passing in infofilters with an example
> --
>
> Key: YARN-5577
> URL: https://issues.apache.org/jira/browse/YARN-5577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader, timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: documentation
> Attachments: YARN-5577.patch
>
>
> In HierarchicalTimelineEntity, setParent/addChild allows setting parent/child 
> entities at the INFO level. The key is a string and the value is an object. 
> Like below, for a YARN_CONTAINER entity, the parent entity is set to the 
> application.
> {code}
> "SYSTEM_INFO_PARENT_ENTITY": {
>"type": "YARN_APPLICATION",
>"id": "application_1471931266232_0024"
>  }
> {code}
> But to use an infofilter on entity type YARN_CONTAINER for a specific 
> applicationId, IIUC there is no way to pass an object as the value in an 
> infofilter. 
> To make retrieval easier, either
> # publish the parent/child entity id and type as strings rather than an 
> object, like below
> {code}
> "SYSTEM_INFO_PARENT_ENTITY_TYPE": "YARN_APPLICATION"
> "SYSTEM_INFO_PARENT_ENTITY_ID":"application_1471931266232_0024"
> {code}
> OR
> # Add ability to provide object as filter with below format like 
> {{infofilters=SYSTEM_INFO_PARENT_ENTITY eq ((type eq YARN_APPLICATION) AND 
> (id eq application_1471931266232_0024))}}
> I believe the 2nd approach will be applicable to any entities, but I am not 
> sure whether HBase supports such custom filters while scanning a table. 
> The 1st approach would be much easier to change. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5221:
--
Attachment: YARN-5221-branch-2.8-v1.patch

Committed this to branch-2.
Uploading a patch to test this specifically against branch-2.8 as well

> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221-branch-2-v1.patch, 
> YARN-5221-branch-2.8-v1.patch, YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch, 
> YARN-5221.009.patch, YARN-5221.010.patch, YARN-5221.011.patch, 
> YARN-5221.012.patch, YARN-5221.013.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease 
> of Container Resources after initial allocation.
> YARN-5085 proposes to allow an AM to request for a change of Container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.
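
For reference, a brief sketch of what the unified update request can look like from the AM side; the factory signature and surrounding objects here are assumptions for illustration.

{code}
// Illustrative only: ask for a resource increase on a running container via
// the unified update API (names and signature assumed, not verified here).
UpdateContainerRequest update = UpdateContainerRequest.newInstance(
    container.getVersion(),                 // container version for updates
    container.getId(),
    ContainerUpdateType.INCREASE_RESOURCE,  // or PROMOTE_EXECUTION_TYPE, ...
    Resource.newInstance(2048, 2),          // new capability: 2 GB, 2 vcores
    null);                                  // execution type unchanged
allocateRequest.setUpdateRequests(Collections.singletonList(update));
{code}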



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453109#comment-15453109
 ] 

Varun Saxena commented on YARN-5585:


bq. Can we translate the fromId request into some HBase filters
Yes, for fetching apps within a flow run, we can set the start row in the HBase 
scan to achieve this. The application ID part of the application table row key is 
stored as 12 bytes (an inverted cluster timestamp of 8 bytes and an inverted 
sequence number of 4 bytes). So within the scope of a flow run, we can encode 
fromId as the application ID part while specifying the start row of the HBase 
scan.

For getting apps within a flow, in addition to the app id (received from fromId), 
we can specify the flow run id as the inverted value of Long.MAX_VALUE, i.e. 0, 
and set this as the start row of the HBase scan. This would require comparatively 
more matches but should be fine as we will be doing a row key prefix match.
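
A minimal sketch of the start-row construction described above, assuming a helper 
that mirrors the 12-byte encoding (class and method names here are illustrative, 
not from the ATSv2 code base):
{code}
import java.util.Arrays;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.yarn.api.records.ApplicationId;

public class FromIdScanSketch {
  // Encode the app id the way the row key stores it: inverted values make the
  // natural byte order of the table descending (newest app first).
  static byte[] encodeAppId(ApplicationId appId) {
    byte[] ts = Bytes.toBytes(Long.MAX_VALUE - appId.getClusterTimestamp());
    byte[] seq = Bytes.toBytes(Integer.MAX_VALUE - appId.getId());
    return Bytes.add(ts, seq); // 8 + 4 = 12 bytes
  }

  // Scan rows strictly after fromId while staying inside one flow-run prefix.
  static Scan scanAfter(byte[] flowRunRowPrefix, ApplicationId fromId) {
    byte[] startRow = Bytes.add(flowRunRowPrefix, encodeAppId(fromId),
        new byte[] {0}); // trailing 0x00 starts just past the fromId row
    byte[] stopRow = Arrays.copyOf(flowRunRowPrefix, flowRunRowPrefix.length);
    stopRow[stopRow.length - 1]++; // simplified; real code must handle 0xFF
    Scan scan = new Scan();
    scan.setStartRow(startRow);
    scan.setStopRow(stopRow);
    return scan;
  }
}
{code}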

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications app-1, app-2 ... app-10 are stored in the database,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, it 
> is difficult.
> So the proposal is to have fromId in the filter like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453070#comment-15453070
 ] 

Li Lu commented on YARN-5585:
-

Can we translate the fromId request into some HBase filters so that we can 
process this request on the storage layer? I agree with [~varun_saxena] that 
supporting fromId for containers may be different. Containers are not a top-level 
concept for the timeline service, so unless there is a strong enough reason, I'd 
be inclined not to introduce a separate mechanism for containers. 

bq. But once rows are retrieved from HBase, it is sorted as 
TimelineEntity#compareTo provided. 
We can certainly do this, but note that this requires an in-memory operation to 
actually sort all entities, rather than reading only part of them out of the 
storage. 

bq. However, the problem with this kind of an approach is that new apps keep on 
getting added so result may not be latest.
I'm fine if the results are not the "latest". As long as the system behaves in a 
linearizable fashion (results are consistent according to time) we're fine. 

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications app-1, app-2 ... app-10 are stored in the database,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, it 
> is difficult.
> So the proposal is to have fromId in the filter like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-08-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453039#comment-15453039
 ] 

Yufei Gu commented on YARN-5554:


Thanks [~wilfreds] for working on this. The patch looks good to me generally. 
My one concern is that {{moveApplicationAcrossQueues}} doesn't check whether the 
target queue exists first, and neither do its callers. I can see it's OK that its 
callee {{queueACLsManager.checkAccess}} does the NULL check of the target queue, 
but the LOG information seems vague. What about doing the NULL check in 
{{moveApplicationAcrossQueues}}, providing the explicit message "target queue 
doesn't exist", and removing the NULL check added in this patch from its callee 
{{queueACLsManager.checkAccess}}?
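
A minimal sketch of the suggested check (the method shape and message are assumed 
for illustration, not taken from the patch; {{queue}} is whatever the scheduler's 
queue lookup returned):
{code}
import org.apache.hadoop.yarn.exceptions.YarnException;

// Fail fast in moveApplicationAcrossQueues with an explicit message, instead
// of relying on the NULL check added deep inside queueACLsManager.checkAccess.
static void validateTargetQueueExists(Object queue, String targetQueueName)
    throws YarnException {
  if (queue == null) {
    throw new YarnException("moveApplicationAcrossQueues failed: target queue "
        + targetQueueName + " does not exist");
  }
}
{code}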

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4793) [Umbrella] Simplified API layer for services and beyond

2016-08-31 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453011#comment-15453011
 ] 

Vinod Kumar Vavilapalli commented on YARN-4793:
---

[~gsaha] / [~jianhe], can we please move this initial patch to a sub-task under 
this JIRA? That way, as you keep making more progress on other items, each of 
them can be their own sub-tasks. Thanks!

> [Umbrella] Simplified API layer for services and beyond
> ---
>
> Key: YARN-4793
> URL: https://issues.apache.org/jira/browse/YARN-4793
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Gour Saha
> Attachments: 20160603-YARN-Simplified-V1-API-Examples.adoc, 
> 20160603-YARN-Simplified-V1-API-Layer-For-Services.pdf, 
> 20160603-YARN-Simplified-V1-API-Layer-For-Services.yaml, 
> YARN-4793-yarn-native-services.001.patch
>
>
> [See overview doc at YARN-4692, modifying and copy-pasting some of the 
> relevant pieces and sub-section 3.3.2 to track the specific sub-item.]
> Bringing a new service on YARN today is not a simple experience. The APIs of 
> existing frameworks are either too low-level (native YARN), require writing 
> new code (for frameworks with programmatic APIs) or require writing a complex 
> spec (for declarative frameworks).
> In addition to building critical building blocks inside YARN (as part of 
> other efforts at YARN-4692), we should also look to simplifying the 
> user-facing story for building services. The experience of projects like 
> Slider building real-life services like HBase, Storm, Accumulo, Solr etc. 
> gives us some very good learnings on what simplified APIs for building 
> services should look like.
> To this end, we should look at a new simple-services API layer backed by REST 
> interfaces. The REST layer can act as a single point of entry for creation 
> and lifecycle management of YARN services. Services here can range from 
> simple single-component apps to the most complex, multi-component 
> applications with special orchestration needs.
> We should also look at making this a unified REST-based entry point for other 
> important features like resource-profile management (YARN-3926), 
> package-definitions' lifecycle-management and service-discovery (YARN-913 / 
> YARN-4757). We also need to flesh out its relation to our present much 
> lower-level REST APIs (YARN-1695) in YARN for application-submission and 
> management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5607) Document TestContainerResourceUsage#waitForContainerCompletion

2016-08-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452992#comment-15452992
 ] 

Karthik Kambatla commented on YARN-5607:


I am open to either of the options - (1) adding it to MockRM, (2) a separate util 
class for operations on MockRM. If I am forced to pick, I might go with the 
latter, just to keep the size of MockRM small. 

> Document TestContainerResourceUsage#waitForContainerCompletion
> --
>
> Key: YARN-5607
> URL: https://issues.apache.org/jira/browse/YARN-5607
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager, test
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>  Labels: newbie
>
> The logic in TestContainerResourceUsage#waitForContainerCompletion 
> (introduced in YARN-5024) is not immediately obvious. It could use some 
> documentation. Also, this seems like a useful helper method. Should this be 
> moved to one of the mock classes or to a util class? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5602) Utils for Federation State and Policy Store

2016-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452989#comment-15452989
 ] 

Hadoop QA commented on YARN-5602:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 30s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 22s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826466/YARN-5602-YARN-2915.v2.patch
 |
| JIRA Issue | YARN-5602 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux a5f3a73450b8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / c77269d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javadoc | 

[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-31 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452972#comment-15452972
 ] 

Li Lu commented on YARN-5561:
-

OK let me clarify: IMO the reader API of the YARN timeline service should focus 
on serving timeline entities according to the caller's request, not on how to 
serve YARN-specific use cases. To the storage layer of the timeline service, 
requesting "container info" should be similar to requesting distributed shell 
application information or Tez job information. I noticed that in this patch, 
we're passing some predefined constants, like:
{code}
String entityType = TimelineEntityType.YARN_CONTAINER.toString();
{code}
This will query for a specific type of timeline entity. We may want to 
provide a different endpoint (like /ws/v2/applicationhistory) to support this 
YARN-specific use case. 

In v1, we have AHSWebServices to support YARN-specific application history 
information. Maybe we would like to keep it the same way? 

This is my own (and subjective) idea. Feel free to let me know if you notice 
some critical things I'm missing... Thanks! 
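
For illustration, the two entry points could look like below; the 
{{/ws/v2/applicationhistory}} path is hypothetical, simply carrying the spirit of 
the v1 AHSWebServices forward:
{code}
# Generic reader API: YARN_CONTAINER is just an opaque entity-type string here
GET /ws/v2/timeline/clusters/{cluster-id}/apps/{app-id}/entities/YARN_CONTAINER

# Hypothetical YARN-specific endpoint serving the same data
GET /ws/v2/applicationhistory/apps/{app-id}/containers
{code}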

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are pretty much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/entities}}, which should display the 
> list of entities that can be queried.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452947#comment-15452947
 ] 

Sunil G commented on YARN-5598:
---

The general create-release script change now builds the UI artifacts. Verified in 
an Ubuntu env. However, I have not checked Docker; I will verify that and update 
here after setting up a Docker environment. Thanks [~leftnoteasy]

> [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui
> --
>
> Key: YARN-5598
> URL: https://issues.apache.org/jira/browse/YARN-5598
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5598-YARN-3368.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-08-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452939#comment-15452939
 ] 

Sunil G commented on YARN-4849:
---

Committed the addendum patch to fix the ASF warnings to the branch. Thanks Wangda.

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-3368
>
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, 
> YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, 
> YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, 
> YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, 
> YARN-4849-YARN-3368.doc-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.doc-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.rat-fix-08302016.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-08-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452908#comment-15452908
 ] 

Sunil G commented on YARN-4849:
---

+1. Tested in my local cluster and ran the rat plugin. Committing the same.

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-3368
>
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, 
> YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, 
> YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, 
> YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, 
> YARN-4849-YARN-3368.doc-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.doc-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.rat-fix-08302016.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5602) Utils for Federation State and Policy Store

2016-08-31 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5602:
---
Attachment: YARN-5602-YARN-2915.v2.patch

> Utils for Federation State and Policy Store
> ---
>
> Key: YARN-5602
> URL: https://issues.apache.org/jira/browse/YARN-5602
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5602-YARN-2915.v1.patch, 
> YARN-5602-YARN-2915.v2.patch
>
>
> This JIRA tracks the creation of utils for Federation State and Policy Store 
> such as Error Codes, Exceptions...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5602) Utils for Federation State and Policy Store

2016-08-31 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452884#comment-15452884
 ] 

Giovanni Matteo Fumarola commented on YARN-5602:


Attached V2 with the Checkstyle fixes.

> Utils for Federation State and Policy Store
> ---
>
> Key: YARN-5602
> URL: https://issues.apache.org/jira/browse/YARN-5602
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5602-YARN-2915.v1.patch, 
> YARN-5602-YARN-2915.v2.patch
>
>
> This JIRA tracks the creation of utils for Federation State and Policy Store 
> such as Error Codes, Exceptions...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5607) Document TestContainerResourceUsage#waitForContainerCompletion

2016-08-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452882#comment-15452882
 ] 

Sunil G commented on YARN-5607:
---

Yes, a common util class for all Mock** classes would be better. The waitFor* 
methods could be moved there. I can help here if we have consensus on this 
approach. Thoughts?

> Document TestContainerResourceUsage#waitForContainerCompletion
> --
>
> Key: YARN-5607
> URL: https://issues.apache.org/jira/browse/YARN-5607
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager, test
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>  Labels: newbie
>
> The logic in TestContainerResourceUsage#waitForContainerCompletion 
> (introduced in YARN-5024) is not immediately obvious. It could use some 
> documentation. Also, this seems like a useful helper method. Should this be 
> moved to one of the mock classes or to a util class? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4945) [Umbrella] Capacity Scheduler Preemption Within a queue

2016-08-31 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4945:
--
Attachment: YARN-2009-wip.2.patch

Attaching wip patch with UT case.

> [Umbrella] Capacity Scheduler Preemption Within a queue
> ---
>
> Key: YARN-4945
> URL: https://issues.apache.org/jira/browse/YARN-4945
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
> Attachments: Intra-Queue Preemption Use Cases.pdf, 
> IntraQueuepreemption-CapacityScheduler (Design).pdf, YARN-2009-wip.2.patch, 
> YARN-2009-wip.patch
>
>
> This is umbrella ticket to track efforts of preemption within a queue to 
> support features like:
> YARN-2009. YARN-2113. YARN-4781.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5607) Document TestContainerResourceUsage#waitForContainerCompletion

2016-08-31 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452799#comment-15452799
 ] 

Rohith Sharma K S commented on YARN-5607:
-

As of now, all the wait-for*** methods are in MockRM, so I think this one can be 
moved to MockRM as well. Regarding util methods, we basically need to refactor 
the test helper methods that are duplicated in multiple places.

> Document TestContainerResourceUsage#waitForContainerCompletion
> --
>
> Key: YARN-5607
> URL: https://issues.apache.org/jira/browse/YARN-5607
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager, test
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>  Labels: newbie
>
> The logic in TestContainerResourceUsage#waitForContainerCompletion 
> (introduced in YARN-5024) is not immediately obvious. It could use some 
> documentation. Also, this seems like a useful helper method. Should this be 
> moved to one of the mock classes or to a util class? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-08-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452779#comment-15452779
 ] 

Naganarasimha G R commented on YARN-4855:
-

Sorry for the delay [~Tao Jie], I will review tomorrow, India time. But can you 
check whether the checkstyle issues are related to the patch and whether any of 
them can be addressed?

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch
>
>
> Today when we add node labels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5607) Document TestContainerResourceUsage#waitForContainerCompletion

2016-08-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452747#comment-15452747
 ] 

Sunil G commented on YARN-5607:
---

Yes [~kasha]. That makes sense.
Or could we clean it up a little better by removing the extra container 
comparison and keeping it in MockRM itself, or in a new common test util class 
for MockRM? A cleanup could then be done in MockRM to use such a util for these 
methods. Sounds good? 
cc/[~rohithsharma]

> Document TestContainerResourceUsage#waitForContainerCompletion
> --
>
> Key: YARN-5607
> URL: https://issues.apache.org/jira/browse/YARN-5607
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager, test
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>  Labels: newbie
>
> The logic in TestContainerResourceUsage#waitForContainerCompletion 
> (introduced in YARN-5024) is not immediately obvious. It could use some 
> documentation. Also, this seems like a useful helper method. Should this be 
> moved to one of the mock classes or to a util class? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.

2016-08-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452667#comment-15452667
 ] 

Sunil G commented on YARN-4849:
---

Looks fine to me. I applied the patch and built a tarball, and also deployed it.
I can commit if it's fine.

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-3368
>
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, 
> YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, 
> YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, 
> YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, 
> YARN-4849-YARN-3368.doc-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.doc-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08172016.1.patch, 
> YARN-4849-YARN-3368.license-fix-08232016.1.patch, 
> YARN-4849-YARN-3368.rat-fix-08302016.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452636#comment-15452636
 ] 

Karthik Kambatla commented on YARN-5566:


+1. Will commit this later today. 

[~djp] - could you take a quick look?

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch, YARN-5566.004.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452583#comment-15452583
 ] 

Varun Saxena commented on YARN-5585:


If the use case is only for apps, then the row keys in the application table are 
stored in a sorted manner (in descending order) within the scope of a flow / flow 
run.
And we can easily support fromId along with limit to achieve some sort of 
pagination here without any performance penalty.

However, the problem with this kind of an approach is that new apps keep on 
getting added, so the result may not be the latest. For instance, if there are 
100 apps app100-app1 in ATS and we show 10 apps on each page, then if we move to 
page 3 we will show apps from app80-app71, but it is possible that say 5 more 
apps get added in the meantime, i.e. we now have app105 to app1 in ATS.
Ideally page 3 should then show app85-app76.

Entities in the entity table, though, are not sorted, because an entity could be 
anything.
If we have a similar use case for containers, we can consider separating it out 
to a different table and have special handling for it. But there should be a 
use case for it.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications app-1, app-2 ... app-10 are stored in the database,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, it 
> is difficult.
> So the proposal is to have fromId in the filter like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452583#comment-15452583
 ] 

Varun Saxena edited comment on YARN-5585 at 8/31/16 3:51 PM:
-

If the use case is only for apps, then the row keys in the application table are 
stored in a sorted manner (in descending order) within the scope of a flow / flow 
run.
And we can easily support fromId along with limit to achieve some sort of 
pagination here without any performance penalty.

However, the problem with this kind of an approach is that new apps keep on 
getting added, so the result may not be the latest. For instance, if there are 
100 apps app100-app1 in ATS and we show 10 apps on each page, then if we move to 
page 3 we will show apps from app80-app71, but it is possible that say 5 more 
apps get added in the meantime, i.e. we now have app105 to app1 in ATS.
Ideally page 3 should then show app85-app76. But I guess this would have 
already been considered.

Entities in the entity table, though, are not sorted, because an entity could be 
anything.
If we have a similar use case for containers, we can consider separating it out 
to a different table and have special handling for it. But there should be a 
use case for it.


was (Author: varun_saxena):
If the use case is only for apps then the row keys in application table are 
stored in sorted manner (in descending order) within the scope of a flow / flow 
run.
And we can easily support fromId alongwith limit to achieve some sort of 
pagination here without any performance penalty.

However, the problem with this kind of an approach is that new apps keep on 
getting added so result may not be latest. For instance, if there are 100 apps 
app100-app1 in ATS and we show 10 apps on each page. Then, if we move to page 3 
we will show apps from app80-app71 but it is possible that say 5 more apps get 
added in the meantime i.e. we not have app105 to app1 in ATS.
Ideally page 3 should then show app85-app76.

Entities in entity table though are not sorted because entity could be anything.
If we have a similar use case for containers, we can consider separating it out 
to a different table and have special handling for it. But there should be a 
use case for it.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Example: If applications app-1, app-2 ... app-10 are stored in the database,
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, it 
> is difficult.
> So the proposal is to have fromId in the filter like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


