[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore
[ https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698750#comment-13698750 ] Devaraj K commented on YARN-353: The latest patch looks good to me except one nit. {code:java} + public static final String DEFAULT_ZK_RM_STATE_STORE_PARENT_PATH = "rmstore"; {code} This path should start with '/'; otherwise the ZK client will throw an IllegalArgumentException. Add Zookeeper-based store implementation for RMStateStore - Key: YARN-353 URL: https://issues.apache.org/jira/browse/YARN-353 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Hitesh Shah Assignee: Bikas Saha Attachments: YARN-353.1.patch, YARN-353.2.patch, YARN-353.3.patch Add a store that writes RM state data to ZK -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
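The nit above comes down to ZooKeeper's path validation: every znode path must be non-empty and begin with '/'. A minimal sketch of that rule follows; the helper class is hypothetical, not ZooKeeper's actual PathUtils code.

```java
// Hypothetical helper mirroring the basic rule enforced by ZooKeeper's
// path validation: a znode path must be non-empty and start with '/'.
// Illustration only, not the actual org.apache.zookeeper code.
public class ZkPathCheck {
    public static boolean isValidPath(String path) {
        return path != null && !path.isEmpty() && path.charAt(0) == '/';
    }

    public static void main(String[] args) {
        // "rmstore" would be rejected by the real client; "/rmstore" is fine.
        if (isValidPath("rmstore") || !isValidPath("/rmstore")) {
            throw new AssertionError("unexpected validation result");
        }
        System.out.println("ok");
    }
}
```

This is why the default parent path constant needs the leading slash that the follow-up patch adds.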
[jira] [Created] (YARN-897) CapacityScheduler wrongly sorted queues
Djellel Eddine Difallah created YARN-897: Summary: CapacityScheduler wrongly sorted queues Key: YARN-897 URL: https://issues.apache.org/jira/browse/YARN-897 Project: Hadoop YARN Issue Type: Bug Components: capacityscheduler Reporter: Djellel Eddine Difallah The childQueues of a ParentQueue are stored in a TreeSet where UsedCapacity defines the sort order. This ensures that the queue with the least UsedCapacity receives resources next. On containerAssignment we correctly update the order, but we fail to do so on container completions. This corrupts the TreeSet structure, and under-capacity queues might starve for resources.
[jira] [Commented] (YARN-897) CapacityScheduler wrongly sorted queues
[ https://issues.apache.org/jira/browse/YARN-897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699204#comment-13699204 ] Carlo Curino commented on YARN-897: --- The childQueues of a ParentQueue are stored in a TreeSet where UsedCapacity defines the sort order. I believe that for the scheduler to work correctly, we must maintain this order explicitly. When a new container is assigned to an application, the corresponding queue is removed and re-added, maintaining the order. When a container completes, however, the UsedCapacity of the queue changes, but we don't re-sort the childQueues. This means the TreeSet's assumptions are not maintained, and we might fail to assign containers to this queue. Example: Parent queue (root) has four child queues with capacities (A:25%, B:25%, C:25%, D:25%). The cluster has 10GB of resources with a minimum allocation of 1GB. 1- Through some history we came to assign 1, 2, 3, 4 containers respectively to the queues (note: container = 1GB): status child-queues: root.a(0.4), root.b(0.8), root.c(1.2), root.d(1.6) 2- 3 containers from D complete, status child-queues: root.a(0.4), root.b(0.8), root.c(1.2), root.d(0.4) 3- Now if A and B keep receiving and releasing containers without ever passing the 1.2 mark of C, we might have D stuck behind C, never receiving containers. In practice this might not show up often because of reservations (which bypass this ordering). If D has reservations pending, it might get at least one container, and this will trigger the re-sorting, thus unsticking it. Nonetheless, this should be addressed. I discussed this briefly with a few folks at Hadoop Summit and we seemed to confirm the problem, but we should double-check further. [~dedcode] will post a small test that triggers the issue, and an idea for a patch, soon... comments welcome. 
CapacityScheduler wrongly sorted queues --- Key: YARN-897 URL: https://issues.apache.org/jira/browse/YARN-897 Project: Hadoop YARN Issue Type: Bug Components: capacityscheduler Reporter: Djellel Eddine Difallah The childQueues of a ParentQueue are stored in a TreeSet where UsedCapacity defines the sort order. This ensures that the queue with the least UsedCapacity receives resources next. On containerAssignment we correctly update the order, but we fail to do so on container completions. This corrupts the TreeSet structure, and under-capacity queues might starve for resources.
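The failure mode described above can be reproduced with a plain java.util.TreeSet: mutating an element's sort key in place, without removing and re-adding it, leaves the tree inconsistent so that contains() misses an element that is actually present. A minimal sketch, where the Queue class is an illustrative stand-in and not the CapacityScheduler's:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class QueueSortDemo {
    // Illustrative stand-in for a scheduler queue; not the real class.
    static final class Queue {
        final String name;
        double usedCapacity;
        Queue(String name, double usedCapacity) {
            this.name = name;
            this.usedCapacity = usedCapacity;
        }
    }

    // Returns true when the stale ordering makes contains() miss an
    // element that is actually in the set.
    public static boolean staleLookupFails() {
        TreeSet<Queue> childQueues = new TreeSet<>(
            Comparator.comparingDouble((Queue q) -> q.usedCapacity)
                      .thenComparing(q -> q.name));
        childQueues.add(new Queue("root.a", 0.4));
        childQueues.add(new Queue("root.c", 1.2));
        Queue d = new Queue("root.d", 1.6);
        childQueues.add(d);

        // A container completion lowers D's used capacity, but nothing
        // re-sorts the tree -- the bug described in this issue.
        d.usedCapacity = 0.4;

        // The lookup follows the now-stale comparator path and misses d.
        return !childQueues.contains(d);
    }

    public static void main(String[] args) {
        System.out.println("stale lookup fails: " + staleLookupFails());
    }
}
```

The fix direction is the same remove/mutate/re-add pattern the scheduler already applies on container assignment.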
[jira] [Updated] (YARN-897) CapacityScheduler wrongly sorted queues
[ https://issues.apache.org/jira/browse/YARN-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Djellel Eddine Difallah updated YARN-897: - Attachment: TestBugParentQueue.java Simple JUnit test that triggers the bug. CapacityScheduler wrongly sorted queues --- Key: YARN-897 URL: https://issues.apache.org/jira/browse/YARN-897 Project: Hadoop YARN Issue Type: Bug Components: capacityscheduler Reporter: Djellel Eddine Difallah Attachments: TestBugParentQueue.java The childQueues of a ParentQueue are stored in a TreeSet where UsedCapacity defines the sort order. This ensures that the queue with the least UsedCapacity receives resources next. On containerAssignment we correctly update the order, but we fail to do so on container completions. This corrupts the TreeSet structure, and under-capacity queues might starve for resources.
[jira] [Created] (YARN-898) Snapshot support for distcp
Binglin Chang created YARN-898: -- Summary: Snapshot support for distcp Key: YARN-898 URL: https://issues.apache.org/jira/browse/YARN-898 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Binglin Chang Add snapshot-based incremental copy to distcp, so that we can do iterative, consistent backups between Hadoop clusters.
[jira] [Resolved] (YARN-898) Snapshot support for distcp
[ https://issues.apache.org/jira/browse/YARN-898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Binglin Chang resolved YARN-898. Resolution: Invalid Sorry, this should be in HADOOP. Snapshot support for distcp --- Key: YARN-898 URL: https://issues.apache.org/jira/browse/YARN-898 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Binglin Chang Add snapshot-based incremental copy to distcp, so that we can do iterative, consistent backups between Hadoop clusters.
[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text
[ https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699270#comment-13699270 ] Zhijie Shen commented on YARN-649: -- bq. Oops, leaving in MediaType.APPLICATION_JSON was a mistake. My intention was actually to have it only support plain text. Thoughts? For MAPREDUCE-4362 and YARN-675, I think TEXT is enough. However, if it does no harm, how about leaving more media-type options open to users? bq. My goal here was to implement the minimum needed to work on MAPREDUCE-4362 and YARN-675. Agree. Maybe the enhancement can be discussed in YARN-896 later. Make container logs available over HTTP in plain text - Key: YARN-649 URL: https://issues.apache.org/jira/browse/YARN-649 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, YARN-649.patch, YARN-752-1.patch It would be good to make container logs available over the REST API for MAPREDUCE-4362 and so that they can be accessed programmatically in general.
[jira] [Updated] (YARN-353) Add Zookeeper-based store implementation for RMStateStore
[ https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-353: - Attachment: YARN-353.4.patch Right, thanks for pointing that out. Updated the patch and made some changes along with it. Add Zookeeper-based store implementation for RMStateStore - Key: YARN-353 URL: https://issues.apache.org/jira/browse/YARN-353 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Hitesh Shah Assignee: Bikas Saha Attachments: YARN-353.1.patch, YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch Add a store that writes RM state data to ZK
[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore
[ https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699305#comment-13699305 ] Hadoop QA commented on YARN-353: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12590691/YARN-353.4.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1420//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1420//console This message is automatically generated. 
Add Zookeeper-based store implementation for RMStateStore - Key: YARN-353 URL: https://issues.apache.org/jira/browse/YARN-353 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Hitesh Shah Assignee: Bikas Saha Attachments: YARN-353.1.patch, YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch Add a store that writes RM state data to ZK
[jira] [Commented] (YARN-649) Make container logs available over HTTP in plain text
[ https://issues.apache.org/jira/browse/YARN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699312#comment-13699312 ] Alejandro Abdelnur commented on YARN-649: - If we support JSON/XML, how will we wrap the logs in them? Is it worth it? Make container logs available over HTTP in plain text - Key: YARN-649 URL: https://issues.apache.org/jira/browse/YARN-649 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-649-2.patch, YARN-649-3.patch, YARN-649-4.patch, YARN-649.patch, YARN-752-1.patch It would be good to make container logs available over the REST API for MAPREDUCE-4362 and so that they can be accessed programmatically in general.
[jira] [Commented] (YARN-896) Roll up for long lived YARN
[ https://issues.apache.org/jira/browse/YARN-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699314#comment-13699314 ] Steve Loughran commented on YARN-896: - Based on our Hoya, HBase on YARN work: * we need a restarted AM to be given the existing set of containers from its previous instance. The use case there is that region servers should stay up while the AM and master are restarted. * maybe: be able to warn YARN that the services will be long-lived. That could be used in scheduling and placement. * anti-affinity is needed to declare that different container instances SHOULD be deployed on different nodes (use case: region servers). If failure domains are supported in the topology, anti-affinity should use them. I don't know if we'd want best-effort vs absolute requirements. * add the ability to increase the requirements of running containers, e.g. say this service is using more RAM than expected, reduce the amount available to others. * maybe: the ability to send kill signals to container processes, to do a graceful kill before escalating. This is of limited value if an extra process (such as {{bin/hbase}}) intervenes in the startup process. There's also long-lived service discovery, a topic for another JIRA. Roll up for long lived YARN --- Key: YARN-896 URL: https://issues.apache.org/jira/browse/YARN-896 Project: Hadoop YARN Issue Type: New Feature Reporter: Robert Joseph Evans YARN is intended to be general purpose, but it is missing some features to be able to truly support long-lived applications and long-lived containers. This ticket is intended to # discuss what is needed to support long-lived processes # track the resulting JIRAs.
[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter
[ https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699325#comment-13699325 ] Vinod Kumar Vavilapalli commented on YARN-727: -- bq. To clarify on my previous comment, we need @vinodkv to confirm whether GetAllApplicationsRequest can be changed. As was pointed out on the dev mailing list on the release thread, we should get this in. So, yeah, let's do this. ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter Key: YARN-727 URL: https://issues.apache.org/jira/browse/YARN-727 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.1.0-beta Reporter: Siddharth Seth Assignee: Xuan Gong Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch Now that an ApplicationType is registered on ApplicationSubmission, getAllApplications should be able to use this string to query for a specific application type.
[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter
[ https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-727: - Priority: Blocker (was: Major) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter Key: YARN-727 URL: https://issues.apache.org/jira/browse/YARN-727 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.1.0-beta Reporter: Siddharth Seth Assignee: Xuan Gong Priority: Blocker Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch Now that an ApplicationType is registered on ApplicationSubmission, getAllApplications should be able to use this string to query for a specific application type.
[jira] [Updated] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
[ https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-791: - Priority: Blocker (was: Major) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API - Key: YARN-791 URL: https://issues.apache.org/jira/browse/YARN-791 Project: Hadoop YARN Issue Type: Sub-task Components: api, resourcemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Priority: Blocker Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, YARN-791-4.patch, YARN-791-5.patch, YARN-791-6.patch, YARN-791.patch
[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
[ https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699328#comment-13699328 ] Vinod Kumar Vavilapalli commented on YARN-791: -- Marked it as a blocker, let's do this. Sorry for all the wavering, +1 to Hitesh's proposal. bq. I would like to recommend that the command line be changed to return all nodes too ( with a different option to get only healthy nodes ). However, I am ok with the command line remaining as is today with additional options to get all nodes with better filtering support. Let's leave the command line as is, with additional filtering support. bq. The HTTP API would be URL[?filter=STATE+]. if filter= param is not specified means ALL. if filter= param is specified and it is empty or invalid we return an ERROR response. bq. The change of param name from state to filter seems also a bit more correct and self explanatory. This will not be future-proof if and when we want to add filters based on other properties. I'd go with 'state' to be explicit. Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API - Key: YARN-791 URL: https://issues.apache.org/jira/browse/YARN-791 Project: Hadoop YARN Issue Type: Sub-task Components: api, resourcemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Priority: Blocker Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, YARN-791-4.patch, YARN-791-5.patch, YARN-791-6.patch, YARN-791.patch
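The parameter semantics quoted above (parameter absent means ALL; present but empty or invalid means an error response) can be sketched as follows. The NodeState values and the parse helper are illustrative assumptions, not YARN's actual RMWebServices code:

```java
import java.util.EnumSet;
import java.util.Locale;

public class StatesFilter {
    // Hypothetical subset of node states, for illustration only.
    enum NodeState { NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST }

    // Absent parameter (null) => all states; empty or unrecognized => error.
    public static EnumSet<NodeState> parse(String statesParam) {
        if (statesParam == null) {
            return EnumSet.allOf(NodeState.class);  // no filter given: ALL
        }
        if (statesParam.trim().isEmpty()) {
            throw new IllegalArgumentException("empty states filter");
        }
        EnumSet<NodeState> result = EnumSet.noneOf(NodeState.class);
        for (String s : statesParam.split(",")) {
            // valueOf throws IllegalArgumentException for an invalid name,
            // which a web layer would map to an ERROR response.
            result.add(NodeState.valueOf(s.trim().toUpperCase(Locale.ROOT)));
        }
        return result;
    }
}
```

Distinguishing "absent" from "empty" is the crux of the proposal: only the former defaults to returning everything.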
[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
[ https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699336#comment-13699336 ] Alejandro Abdelnur commented on YARN-791: - IMO, we should have consistency in all interfaces (HTTP, Java, PB, CLI). Returning ALL if no filter is specified seems intuitive/expected to me, but if others disagree I won't oppose. Please make a decision so we can get this one in. Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API - Key: YARN-791 URL: https://issues.apache.org/jira/browse/YARN-791 Project: Hadoop YARN Issue Type: Sub-task Components: api, resourcemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Priority: Blocker Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, YARN-791-4.patch, YARN-791-5.patch, YARN-791-6.patch, YARN-791.patch
[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
[ https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699346#comment-13699346 ] Sandy Ryza commented on YARN-791: - Chatted with Vinod about this offline. The thinking is that because the command line and web UI are displays, they should return what's most convenient (i.e. only active nodes). But the APIs (Java, webservice, protobuf) should be consistent with each other and return all nodes. Working on a patch that implements this. Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API - Key: YARN-791 URL: https://issues.apache.org/jira/browse/YARN-791 Project: Hadoop YARN Issue Type: Sub-task Components: api, resourcemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Priority: Blocker Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, YARN-791-4.patch, YARN-791-5.patch, YARN-791-6.patch, YARN-791.patch
[jira] [Commented] (YARN-897) CapacityScheduler wrongly sorted queues
[ https://issues.apache.org/jira/browse/YARN-897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699373#comment-13699373 ] Djellel Eddine Difallah commented on YARN-897: -- We spotted this bug while experimenting with dynamic queue updates. The TreeSet methods .contains() and .remove() failed to retrieve a queue that we knew was there, and that gave us a hint that the tree was not sorted properly. The attached test is a [simple junit test | https://issues.apache.org/jira/secure/attachment/12590676/TestBugParentQueue.java] inspired by the already available capacity scheduler tests. It simulates the scenario that [~curino] describes above and displays the order in which childQueues is left after a couple of container assignments and completions. I will post a first version of a patch that re-inserts the recently completed container's queue (and all its parents) into their respective parents' childQueues. CapacityScheduler wrongly sorted queues --- Key: YARN-897 URL: https://issues.apache.org/jira/browse/YARN-897 Project: Hadoop YARN Issue Type: Bug Components: capacityscheduler Reporter: Djellel Eddine Difallah Attachments: TestBugParentQueue.java The childQueues of a ParentQueue are stored in a TreeSet where UsedCapacity defines the sort order. This ensures that the queue with the least UsedCapacity receives resources next. On containerAssignment we correctly update the order, but we fail to do so on container completions. This corrupts the TreeSet structure, and under-capacity queues might starve for resources.
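The re-insertion fix described above boils down to the standard remove/mutate/re-add pattern for anything that serves as a sort key in a TreeSet. A hedged sketch of that pattern, as a generic helper rather than the actual patch:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class ResortOnUpdate {
    // Remove the element while its old sort key is still in effect, apply
    // the mutation, then re-add it so the TreeSet places it under the new key.
    public static <T> void updateSorted(TreeSet<T> set, T element, Runnable mutation) {
        set.remove(element);
        mutation.run();
        set.add(element);
    }

    // Returns true when the element remains findable after its key changed.
    public static boolean demo() {
        // double[] stands in for a queue whose element [0] is its used capacity.
        TreeSet<double[]> set =
            new TreeSet<>(Comparator.comparingDouble((double[] x) -> x[0]));
        double[] d = {1.6};
        set.add(new double[]{0.4});
        set.add(d);
        updateSorted(set, d, () -> d[0] = 0.1);  // "container completion"
        return set.contains(d) && set.first() == d;
    }

    public static void main(String[] args) {
        System.out.println("element findable after update: " + demo());
    }
}
```

In the scheduler, this would be applied to the completed container's queue and each of its ancestors, since a child's capacity change also changes every parent's UsedCapacity.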
[jira] [Created] (YARN-899) Get queue administration ACLs working
Sandy Ryza created YARN-899: --- Summary: Get queue administration ACLs working Key: YARN-899 URL: https://issues.apache.org/jira/browse/YARN-899 Project: Hadoop YARN Issue Type: Bug Components: scheduler Affects Versions: 2.1.0-beta Reporter: Sandy Ryza The Capacity Scheduler documents the yarn.scheduler.capacity.root.queue-path.acl_administer_queue config option for controlling who can administer a queue, but it is not hooked up to anything. The Fair Scheduler could make use of a similar option as well. This is a feature-parity regression from MR1.
[jira] [Updated] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
[ https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza updated YARN-791: Attachment: YARN-791-7.patch Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API - Key: YARN-791 URL: https://issues.apache.org/jira/browse/YARN-791 Project: Hadoop YARN Issue Type: Sub-task Components: api, resourcemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Priority: Blocker Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, YARN-791-4.patch, YARN-791-5.patch, YARN-791-6.patch, YARN-791-7.patch, YARN-791.patch
[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
[ https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699455#comment-13699455 ] Sandy Ryza commented on YARN-791: - Uploaded a patch that makes the discussed changes. I had a lot of trouble adding the states filter to the command line in a satisfactory way with the commons libraries, so I left it out. In the interest of time, it might be better to save this for future work. It will be a backwards-compatible change, right? Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API - Key: YARN-791 URL: https://issues.apache.org/jira/browse/YARN-791 Project: Hadoop YARN Issue Type: Sub-task Components: api, resourcemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Priority: Blocker Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, YARN-791-4.patch, YARN-791-5.patch, YARN-791-6.patch, YARN-791-7.patch, YARN-791.patch
[jira] [Commented] (YARN-523) Container localization failures aren't reported from NM to RM
[ https://issues.apache.org/jira/browse/YARN-523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699470#comment-13699470 ] Jian He commented on YARN-523: -- Looked at the code and tried it on a single-node cluster; it turns out the NM is already reporting container localization diagnostics back to the RM. Will close this. Container localization failures aren't reported from NM to RM - Key: YARN-523 URL: https://issues.apache.org/jira/browse/YARN-523 Project: Hadoop YARN Issue Type: Sub-task Reporter: Vinod Kumar Vavilapalli Assignee: Omkar Vinit Joshi This is mainly a pain on crashing AMs, but once we fix this, containers can also benefit - same fix for both.
[jira] [Updated] (YARN-899) Get queue administration ACLs working
[ https://issues.apache.org/jira/browse/YARN-899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated YARN-899: - Target Version/s: 2.1.0-beta Worth considering as a GA blocker since this is a regression vs MR1. Get queue administration ACLs working - Key: YARN-899 URL: https://issues.apache.org/jira/browse/YARN-899 Project: Hadoop YARN Issue Type: Bug Components: scheduler Affects Versions: 2.1.0-beta Reporter: Sandy Ryza The Capacity Scheduler documents the yarn.scheduler.capacity.root.queue-path.acl_administer_queue config option for controlling who can administer a queue, but it is not hooked up to anything. The Fair Scheduler could make use of a similar option as well. This is a feature-parity regression from MR1.
[jira] [Commented] (YARN-675) In YarnClient, pull AM logs on AM container failure
[ https://issues.apache.org/jira/browse/YARN-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699488#comment-13699488 ] Zhijie Shen commented on YARN-675: -- Checked YarnClient and found one issue: the ContainerId is not directly accessible from YarnClient. Correct me if I'm wrong here. One workaround is to use the RESTful API to request either AppInfo or AppAttemptInfo from RMWebServices. It contains the URL of the AM container log. Then, we can use this URL to pull the log. Currently, this URL points to a webpage. After YARN-649 gets fixed, I'd like to update it to point to the RESTful API for obtaining the container log, because IMHO it's enough for the DAO object to just hold the log content, which is independent of rendering. Thoughts, please. In YarnClient, pull AM logs on AM container failure --- Key: YARN-675 URL: https://issues.apache.org/jira/browse/YARN-675 Project: Hadoop YARN Issue Type: Sub-task Components: client Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Zhijie Shen Similar to MAPREDUCE-4362, when an AM container fails, it would be helpful to pull its logs from the NM to the client so that they can be displayed immediately to the user.
[jira] [Commented] (YARN-675) In YarnClient, pull AM logs on AM container failure
[ https://issues.apache.org/jira/browse/YARN-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699497#comment-13699497 ] Sandy Ryza commented on YARN-675: - Would there be an issue with including the AM container ID in ApplicationReport? In YarnClient, pull AM logs on AM container failure --- Key: YARN-675 URL: https://issues.apache.org/jira/browse/YARN-675 Project: Hadoop YARN Issue Type: Sub-task Components: client Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Zhijie Shen Similar to MAPREDUCE-4362, when an AM container fails, it would be helpful to pull its logs from the NM to the client so that they can be displayed immediately to the user.
[jira] [Created] (YARN-900) YarnClientApplication uses composition to hold GetNewApplicationResponse instead of having a simpler flattened structure
Hitesh Shah created YARN-900: Summary: YarnClientApplication uses composition to hold GetNewApplicationResponse instead of having a simpler flattened structure Key: YARN-900 URL: https://issues.apache.org/jira/browse/YARN-900 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Instead of YarnClientApplication having APIs like getApplicationId, getMaximumResourceCapability, etc., it currently holds a GetNewApplicationResponse object. It might be simpler to get rid of GetNewApplicationResponse and return a better-suited object both at the client and over the RPC layer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-900) YarnClientApplication uses composition to hold GetNewApplicationResponse instead of having a simpler flattened structure
[ https://issues.apache.org/jira/browse/YARN-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699533#comment-13699533 ] Karthik Kambatla commented on YARN-900: --- Will this be an incompatible change? YarnClientApplication uses composition to hold GetNewApplicationResponse instead of having a simpler flattened structure Key: YARN-900 URL: https://issues.apache.org/jira/browse/YARN-900 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Instead of YarnClientApplication having APIs like getApplicationId, getMaximumResourceCapability, etc., it currently holds a GetNewApplicationResponse object. It might be simpler to get rid of GetNewApplicationResponse and return a better-suited object both at the client and over the RPC layer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter
[ https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-727: --- Attachment: YARN-727.17.patch ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter Key: YARN-727 URL: https://issues.apache.org/jira/browse/YARN-727 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.1.0-beta Reporter: Siddharth Seth Assignee: Xuan Gong Priority: Blocker Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, YARN-727.17.patch, YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch Now that an ApplicationType is registered on ApplicationSubmission, getAllApplications should be able to use this string to query for a specific application type. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
[ https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699534#comment-13699534 ] Hadoop QA commented on YARN-791: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12590720/YARN-791-7.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 5 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1421//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1421//console This message is automatically generated. 
Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API - Key: YARN-791 URL: https://issues.apache.org/jira/browse/YARN-791 Project: Hadoop YARN Issue Type: Sub-task Components: api, resourcemanager Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Priority: Blocker Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, YARN-791-4.patch, YARN-791-5.patch, YARN-791-6.patch, YARN-791-7.patch, YARN-791.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter
[ https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699535#comment-13699535 ] Xuan Gong commented on YARN-727: 1. Changed the API from GetAllApplicationsRequest to GetApplicationsRequest, including the related name in the proto file. 2. Changed the API from GetAllApplicationsResponse to GetApplicationsResponse, including the related name in the proto file. 3. Changed the function name ApplicationClientProtocol::getAllApplications to ApplicationClientProtocol::getApplications. 4. Addressed all of Hitesh's comments. ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter Key: YARN-727 URL: https://issues.apache.org/jira/browse/YARN-727 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.1.0-beta Reporter: Siddharth Seth Assignee: Xuan Gong Priority: Blocker Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, YARN-727.17.patch, YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch Now that an ApplicationType is registered on ApplicationSubmission, getAllApplications should be able to use this string to query for a specific application type. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
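The rename above turns the unfiltered listing into a filtered query. The intended semantics — return only applications whose type is in the requested set, or all of them when no filter is given — can be sketched independently of the YARN protocol classes; everything below (class, method, and data) is illustrative, not the actual patch code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative sketch of the getApplications(applicationTypes) semantics
 * discussed above: filter applications by type, returning everything
 * when the requested type set is empty. Not the actual YARN code.
 */
public class AppTypeFilter {

    /** Keep only the application ids whose type is in the requested set. */
    static List<String> getApplications(Map<String, String> appIdToType,
                                        Set<String> applicationTypes) {
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, String> e : appIdToType.entrySet()) {
            // An empty filter means "all applications", mirroring the
            // behavior of the old getAllApplications call.
            if (applicationTypes.isEmpty()
                    || applicationTypes.contains(e.getValue())) {
                matches.add(e.getKey());
            }
        }
        return matches;
    }
}
```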
[jira] [Commented] (YARN-675) In YarnClient, pull AM logs on AM container failure
[ https://issues.apache.org/jira/browse/YARN-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699538#comment-13699538 ] Zhijie Shen commented on YARN-675: -- It's feasible to pass ContainerId through ApplicationReport, but I'm a bit conservative about making an API change at this point, especially when the ContainerId is added only for pulling the log. What do you think? [~sandyr], BTW, the URL to the container log is constructed in AppInfo/AppAttemptInfo somewhat differently from what is done in YARN-649. {code} String url = join(HttpConfig.getSchemePrefix(), masterContainer.getNodeHttpAddress(), "/node", "/containerlogs/", ConverterUtils.toString(masterContainer.getId()), "/", app.getUser()); {code} The user is part of the URL. If this is adopted, there's no need to get the user through request.getRemoteUser(). In YarnClient, pull AM logs on AM container failure --- Key: YARN-675 URL: https://issues.apache.org/jira/browse/YARN-675 Project: Hadoop YARN Issue Type: Sub-task Components: client Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Zhijie Shen Similar to MAPREDUCE-4362, when an AM container fails, it would be helpful to pull its logs from the NM to the client so that they can be displayed immediately to the user. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
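The join-based construction quoted above can be mimicked with a small self-contained helper. A plain concatenating join is substituted for YARN's StringHelper.join so the sketch runs on its own, and the host, container id, and user are placeholder values:

```java
/**
 * Self-contained sketch of the container-log URL construction quoted
 * above. A plain concatenating join stands in for YARN's
 * StringHelper.join; host, container id, and user are placeholders.
 */
public class ContainerLogUrl {

    /** Minimal stand-in for StringHelper.join: concatenate all parts. */
    static String join(String... parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    static String containerLogUrl(String schemePrefix, String nodeHttpAddress,
                                  String containerId, String user) {
        // Mirrors the AppInfo construction: the user is the final path
        // component, so the caller needs no request.getRemoteUser() lookup.
        return join(schemePrefix, nodeHttpAddress, "/node", "/containerlogs/",
                    containerId, "/", user);
    }

    public static void main(String[] args) {
        System.out.println(containerLogUrl("http://", "nm.example.com:8042",
            "container_1372636000000_0001_01_000001", "alice"));
        // prints http://nm.example.com:8042/node/containerlogs/container_1372636000000_0001_01_000001/alice
    }
}
```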
[jira] [Updated] (YARN-369) Handle ( or throw a proper error when receiving) status updates from application masters that have not registered
[ https://issues.apache.org/jira/browse/YARN-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mayank Bansal updated YARN-369: --- Attachment: YARN-369-trunk-3.patch Thanks [~bikash] for the review. I have rebased the patch and incorporated all your review comments Thanks, Mayank Handle ( or throw a proper error when receiving) status updates from application masters that have not registered - Key: YARN-369 URL: https://issues.apache.org/jira/browse/YARN-369 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.0.3-alpha, trunk-win Reporter: Hitesh Shah Assignee: Mayank Bansal Attachments: YARN-369.patch, YARN-369-trunk-1.patch, YARN-369-trunk-2.patch, YARN-369-trunk-3.patch Currently, an allocate call from an unregistered application is allowed and the status update for it throws a statemachine error that is silently dropped. org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: STATUS_UPDATE at LAUNCHED at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43) at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:445) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:588) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:99) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:471) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:452) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77) at java.lang.Thread.run(Thread.java:680) 
ApplicationMasterService should likely throw an appropriate error for applications' requests that should not be handled in such cases. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-369) Handle ( or throw a proper error when receiving) status updates from application masters that have not registered
[ https://issues.apache.org/jira/browse/YARN-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699559#comment-13699559 ] Mayank Bansal commented on YARN-369: Sorry it was [~bikassaha] Thanks Handle ( or throw a proper error when receiving) status updates from application masters that have not registered - Key: YARN-369 URL: https://issues.apache.org/jira/browse/YARN-369 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.0.3-alpha, trunk-win Reporter: Hitesh Shah Assignee: Mayank Bansal Attachments: YARN-369.patch, YARN-369-trunk-1.patch, YARN-369-trunk-2.patch, YARN-369-trunk-3.patch Currently, an allocate call from an unregistered application is allowed and the status update for it throws a statemachine error that is silently dropped. org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: STATUS_UPDATE at LAUNCHED at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43) at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:445) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:588) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:99) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:471) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:452) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77) at java.lang.Thread.run(Thread.java:680) ApplicationMasterService should likely throw an appropriate error for 
applications' requests that should not be handled in such cases. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-813) Document and likewise implement relevant checks/escape functions for what form of input is acceptable for setting up a container launch context i.e. special chars in resour
[ https://issues.apache.org/jira/browse/YARN-813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-813: - Issue Type: Improvement (was: Bug) Document and likewise implement relevant checks/escape functions for what form of input is acceptable for setting up a container launch context i.e. special chars in resource names, env vars and the commands --- Key: YARN-813 URL: https://issues.apache.org/jira/browse/YARN-813 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager Affects Versions: 2.0.4-alpha, 0.23.8 Reporter: Hitesh Shah What should a user of yarn escape/not escape when passing in input for the container launch contexts - localized resources' names are used to create symlinks. - Are special chars supported in symlinks or do they need to be escaped? - Likewise for environment variables and commands. Current implementation uses a shell script to setup the environment and launch the commands. What should the user be aware of when setting up the launch context? The input also should be such that a user should not need to change code based on what platform or flavor of shell is being used to setup the env and run the commands. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-56) Handle container requests that request more resources than currently available in the cluster
[ https://issues.apache.org/jira/browse/YARN-56?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-56: Issue Type: Improvement (was: Bug) Handle container requests that request more resources than currently available in the cluster - Key: YARN-56 URL: https://issues.apache.org/jira/browse/YARN-56 Project: Hadoop YARN Issue Type: Improvement Components: resourcemanager Affects Versions: 2.0.2-alpha, 0.23.3 Reporter: Hitesh Shah In heterogenous clusters, a simple check at the scheduler to check if the allocation request is within the max allocatable range is not enough. If there are large nodes in the cluster which are not available, there may be situations where some allocation requests will never be fulfilled. Need an approach to decide when to invalidate such requests. For application submissions, there will need to be a feedback loop for applications that could not be launched. For running AMs, AllocationResponse may need to augmented with information for invalidated/cancelled container requests. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-91) DFIP aka 'NodeManager should handle Disk-Failures In Place'
[ https://issues.apache.org/jira/browse/YARN-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-91: Issue Type: Task (was: Bug) DFIP aka 'NodeManager should handle Disk-Failures In Place' - Key: YARN-91 URL: https://issues.apache.org/jira/browse/YARN-91 Project: Hadoop YARN Issue Type: Task Components: nodemanager Reporter: Vinod Kumar Vavilapalli Moving stuff over from the MAPREDUCE JIRA: MAPREDUCE-3121 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-11) Capacity Scheduler does not support adding sub-queues to the existing queues.
[ https://issues.apache.org/jira/browse/YARN-11?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-11: Issue Type: Improvement (was: Bug) Capacity Scheduler does not support adding sub-queues to the existing queues. - Key: YARN-11 URL: https://issues.apache.org/jira/browse/YARN-11 Project: Hadoop YARN Issue Type: Improvement Reporter: Kiran BC Assignee: Kiran BC Attachments: MAPREDUCE-4524.1.patch, MAPREDUCE-4524.patch In line with MAPREDUCE-3410, there should be a note stating that the Capacity Scheduler does not support adding sub-queues to existing queues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-34) Split/Cleanup YARN and MAPREDUCE documentation
[ https://issues.apache.org/jira/browse/YARN-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-34: Issue Type: Improvement (was: Bug) Split/Cleanup YARN and MAPREDUCE documentation -- Key: YARN-34 URL: https://issues.apache.org/jira/browse/YARN-34 Project: Hadoop YARN Issue Type: Improvement Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Post YARN-1, we need to have clear separation between YARN and mapreduce. We need to have separate sections on site and docs - we already have separate documents. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-34) Split/Cleanup YARN and MAPREDUCE documentation
[ https://issues.apache.org/jira/browse/YARN-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-34: Component/s: documentation Split/Cleanup YARN and MAPREDUCE documentation -- Key: YARN-34 URL: https://issues.apache.org/jira/browse/YARN-34 Project: Hadoop YARN Issue Type: Improvement Components: documentation Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Post YARN-1, we need to have clear separation between YARN and mapreduce. We need to have separate sections on site and docs - we already have separate documents. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-314) Schedulers should allow resource requests of different sizes at the same priority and location
[ https://issues.apache.org/jira/browse/YARN-314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-314: - Issue Type: Improvement (was: Bug) Schedulers should allow resource requests of different sizes at the same priority and location -- Key: YARN-314 URL: https://issues.apache.org/jira/browse/YARN-314 Project: Hadoop YARN Issue Type: Improvement Components: scheduler Affects Versions: 2.0.2-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Fix For: 2.1.0-beta Currently, resource requests for the same container and locality are expected to all be the same size. While it it doesn't look like it's needed for apps currently, and can be circumvented by specifying different priorities if absolutely necessary, it seems to me that the ability to request containers with different resource requirements at the same priority level should be there for the future and for completeness sake. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (YARN-243) Job Client doesn't give progress for Application Master Retries
[ https://issues.apache.org/jira/browse/YARN-243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli resolved YARN-243. -- Resolution: Duplicate Closing as mentioned. Job Client doesn't give progress for Application Master Retries --- Key: YARN-243 URL: https://issues.apache.org/jira/browse/YARN-243 Project: Hadoop YARN Issue Type: Bug Components: client, resourcemanager Affects Versions: 2.0.2-alpha, 2.0.1-alpha Reporter: Devaraj K Assignee: Devaraj K If we configure the AM retries, if the first attempt fails then RM will create next attempt but Job Client doesn't give the progress for the retry attempts. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-463) Show explicitly excluded nodes on the UI
[ https://issues.apache.org/jira/browse/YARN-463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-463: - Issue Type: Improvement (was: Bug) Show explicitly excluded nodes on the UI Key: YARN-463 URL: https://issues.apache.org/jira/browse/YARN-463 Project: Hadoop YARN Issue Type: Improvement Reporter: Vinod Kumar Vavilapalli Labels: usability Nodes can be explicitly excluded via the config yarn.resourcemanager.nodes.exclude-path. We should have a way of displaying this list via web and command line UIs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-483) Improve documentation on log aggregation in yarn-default.xml
[ https://issues.apache.org/jira/browse/YARN-483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-483: - Issue Type: Improvement (was: Bug) Improve documentation on log aggregation in yarn-default.xml Key: YARN-483 URL: https://issues.apache.org/jira/browse/YARN-483 Project: Hadoop YARN Issue Type: Improvement Components: documentation Affects Versions: 2.0.3-alpha Reporter: Sandy Ryza The current documentation for log aggregation is {code:xml} <property> <description>Whether to enable log aggregation</description> <name>yarn.log-aggregation-enable</name> <value>false</value> </property> {code} This could be improved to explain what enabling log aggregation does. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-533) Pointing to the config property when throwing/logging the config-related exception
[ https://issues.apache.org/jira/browse/YARN-533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-533: - Issue Type: Improvement (was: Bug) Pointing to the config property when throwing/logging the config-related exception -- Key: YARN-533 URL: https://issues.apache.org/jira/browse/YARN-533 Project: Hadoop YARN Issue Type: Improvement Reporter: Zhijie Shen Assignee: Zhijie Shen When throwing/logging errors related to configuration, we should always point to the configuration property to let users know which property needs to be changed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-369) Handle ( or throw a proper error when receiving) status updates from application masters that have not registered
[ https://issues.apache.org/jira/browse/YARN-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699587#comment-13699587 ] Hadoop QA commented on YARN-369: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12590738/YARN-369-trunk-3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1423//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1423//console This message is automatically generated. 
Handle ( or throw a proper error when receiving) status updates from application masters that have not registered - Key: YARN-369 URL: https://issues.apache.org/jira/browse/YARN-369 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.0.3-alpha, trunk-win Reporter: Hitesh Shah Assignee: Mayank Bansal Attachments: YARN-369.patch, YARN-369-trunk-1.patch, YARN-369-trunk-2.patch, YARN-369-trunk-3.patch Currently, an allocate call from an unregistered application is allowed and the status update for it throws a statemachine error that is silently dropped. org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: STATUS_UPDATE at LAUNCHED at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43) at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:445) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:588) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:99) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:471) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:452) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77) at java.lang.Thread.run(Thread.java:680) ApplicationMasterService should likely throw an appropriate error for applications' requests that should not be handled in such cases. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-531) RM nodes page should show time-since-last-heartbeat instead of absolute last-heartbeat time
[ https://issues.apache.org/jira/browse/YARN-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-531: - Issue Type: Improvement (was: Bug) RM nodes page should show time-since-last-heartbeat instead of absolute last-heartbeat time --- Key: YARN-531 URL: https://issues.apache.org/jira/browse/YARN-531 Project: Hadoop YARN Issue Type: Improvement Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Labels: usability Absolute last-heartbeat time is absolutely useless ;) We need to replace it with time since last heartbeat. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-414) [Umbrella] Usability issues in YARN
[ https://issues.apache.org/jira/browse/YARN-414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-414: - Issue Type: Task (was: Bug) [Umbrella] Usability issues in YARN --- Key: YARN-414 URL: https://issues.apache.org/jira/browse/YARN-414 Project: Hadoop YARN Issue Type: Task Reporter: Hitesh Shah Umbrella jira to track all forms of usability issues in YARN that need to be addressed before YARN can be considered stable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-446) Container killed before hprof dumps profile.out
[ https://issues.apache.org/jira/browse/YARN-446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-446: - Issue Type: Improvement (was: Bug) Container killed before hprof dumps profile.out --- Key: YARN-446 URL: https://issues.apache.org/jira/browse/YARN-446 Project: Hadoop YARN Issue Type: Improvement Components: client Affects Versions: 2.0.3-alpha Reporter: Radim Kolar If there is profiling enabled for mapper or reducer then hprof dumps profile.out at process exit. It is dumped after task signaled to AM that work is finished. AM kills container with finished work without waiting for hprof to finish dumps. If hprof is dumping larger outputs (such as with depth=4 while depth=3 works) , it could not finish dump in time before being killed making entire dump unusable because cpu and heap stats are missing. There needs to be better delay before container is killed if profiling is enabled. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-523) Container localization failures aren't reported from NM to RM
[ https://issues.apache.org/jira/browse/YARN-523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-523: - Attachment: YARN-523.patch Added a test case to verify that the diagnostics are reported. Container localization failures aren't reported from NM to RM - Key: YARN-523 URL: https://issues.apache.org/jira/browse/YARN-523 Project: Hadoop YARN Issue Type: Sub-task Reporter: Vinod Kumar Vavilapalli Assignee: Omkar Vinit Joshi Attachments: YARN-523.patch This is mainly a pain on crashing AMs, but once we fix this, containers also can benefit - same fix for both.
[jira] [Updated] (YARN-240) Rename ProcessTree.isSetsidAvailable
[ https://issues.apache.org/jira/browse/YARN-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-240: - Issue Type: Improvement (was: Bug) Rename ProcessTree.isSetsidAvailable Key: YARN-240 URL: https://issues.apache.org/jira/browse/YARN-240 Project: Hadoop YARN Issue Type: Improvement Affects Versions: trunk-win Reporter: Bikas Saha Assignee: Bikas Saha The logical use of this member is to find out whether processes can be grouped into a unit for process manipulation, e.g. killing process groups. setsid is the Linux implementation and it leaks into the name. I suggest renaming it to isProcessGroupAvailable.
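The proposal is to keep the platform-neutral question ("can processes be grouped?") in the public name and push the Linux-specific setsid detail behind it. A toy sketch of that separation (this is not the real ProcessTree class; the detection logic here is a placeholder assumption):

```java
public class ProcessTreeNaming {
  // Placeholder detection: the real code probes for the setsid capability,
  // not just the OS name. Linux-specific detail stays in the private field.
  static final boolean isSetsidAvailable =
      System.getProperty("os.name", "").toLowerCase().contains("linux");

  // Proposed platform-neutral name: callers ask about process groups,
  // not about setsid.
  static boolean isProcessGroupAvailable() {
    return isSetsidAvailable;
  }

  public static void main(String[] args) {
    System.out.println(isProcessGroupAvailable());
  }
}
```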
[jira] [Updated] (YARN-816) Implement AM recovery for distributed shell
[ https://issues.apache.org/jira/browse/YARN-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-816: - Issue Type: Improvement (was: Bug) Implement AM recovery for distributed shell --- Key: YARN-816 URL: https://issues.apache.org/jira/browse/YARN-816 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Vinod Kumar Vavilapalli Simple recovery to just continue from where it left off is a good start.
[jira] [Commented] (YARN-296) Resource Manager throws InvalidStateTransitonException: Invalid event: APP_ACCEPTED at RUNNING for RMAppImpl
[ https://issues.apache.org/jira/browse/YARN-296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699595#comment-13699595 ] Mayank Bansal commented on YARN-296: Thanks [~devaraj.k] for the review. Updated and rebased the patch. Thanks, Mayank Resource Manager throws InvalidStateTransitonException: Invalid event: APP_ACCEPTED at RUNNING for RMAppImpl Key: YARN-296 URL: https://issues.apache.org/jira/browse/YARN-296 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.0.2-alpha, 2.0.1-alpha Reporter: Devaraj K Assignee: Mayank Bansal Attachments: YARN-296-trunk-1.patch, YARN-296-trunk-2.patch {code:xml} 2012-12-28 11:14:47,671 ERROR org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Can't handle this event at current state org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: APP_ACCEPTED at RUNNING at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301) at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43) at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:528) at org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:72) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:405) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:389) at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126) at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75) at java.lang.Thread.run(Thread.java:662) {code}
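The trace above is the table-driven state machine rejecting an event that has no registered transition for the current state. The usual remedy for a harmless late or duplicate event is to register it as a self-transition. A toy sketch of the failure mode and that remedy (this is a simplified stand-in, not the real StateMachineFactory API, and it does not claim to be the YARN-296 patch itself):

```java
import java.util.HashMap;
import java.util.Map;

public class MiniStateMachine {
  enum State { NEW, ACCEPTED, RUNNING }
  enum Event { START, APP_ACCEPTED }

  // Transition table keyed by (state, event); lookups with no entry throw,
  // mirroring InvalidStateTransitonException in the trace above.
  private final Map<String, State> table = new HashMap<>();
  private State current = State.NEW;

  void addTransition(State from, Event e, State to) {
    table.put(from + ":" + e, to);
  }

  void handle(Event e) {
    State next = table.get(current + ":" + e);
    if (next == null) {
      throw new IllegalStateException("Invalid event: " + e + " at " + current);
    }
    current = next;
  }

  State getCurrentState() { return current; }

  public static void main(String[] args) {
    MiniStateMachine sm = new MiniStateMachine();
    sm.addTransition(State.NEW, Event.START, State.ACCEPTED);
    sm.addTransition(State.ACCEPTED, Event.APP_ACCEPTED, State.RUNNING);
    // The remedy: register the late duplicate as a no-op self-transition.
    sm.addTransition(State.RUNNING, Event.APP_ACCEPTED, State.RUNNING);
    sm.handle(Event.START);
    sm.handle(Event.APP_ACCEPTED);
    sm.handle(Event.APP_ACCEPTED); // without the self-transition, this throws
    System.out.println(sm.getCurrentState()); // prints "RUNNING"
  }
}
```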
[jira] [Updated] (YARN-296) Resource Manager throws InvalidStateTransitonException: Invalid event: APP_ACCEPTED at RUNNING for RMAppImpl
[ https://issues.apache.org/jira/browse/YARN-296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mayank Bansal updated YARN-296: --- Attachment: YARN-296-trunk-2.patch Resource Manager throws InvalidStateTransitonException: Invalid event: APP_ACCEPTED at RUNNING for RMAppImpl Key: YARN-296 URL: https://issues.apache.org/jira/browse/YARN-296
[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter
[ https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699603#comment-13699603 ] Hadoop QA commented on YARN-727: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12590736/YARN-727.17.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1422//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1422//console This message is automatically generated. 
ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter Key: YARN-727 URL: https://issues.apache.org/jira/browse/YARN-727 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.1.0-beta Reporter: Siddharth Seth Assignee: Xuan Gong Priority: Blocker Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, YARN-727.17.patch, YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch Now that an ApplicationType is registered on ApplicationSubmission, getAllApplications should be able to use this string to query for a specific application type.
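Functionally, the request is a server-side filter over application reports keyed on the type string registered at submission. A hypothetical sketch of that filtering semantics (plain POJOs, not the actual ClientRMProtocol records; an empty or null filter set returns everything, matching the pre-existing behavior):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AppTypeFilter {
  // Stand-in for an application report: just the fields this sketch needs.
  static class App {
    final String id;
    final String type;
    App(String id, String type) { this.id = id; this.type = type; }
  }

  // Returns applications whose type is in the requested set; a null or empty
  // set means "no filter", preserving the old getAllApplications behavior.
  static List<App> getApplications(List<App> all, Set<String> types) {
    if (types == null || types.isEmpty()) return all;
    List<App> out = new ArrayList<>();
    for (App a : all) {
      if (types.contains(a.type)) out.add(a);
    }
    return out;
  }

  public static void main(String[] args) {
    List<App> apps = Arrays.asList(new App("application_1", "MAPREDUCE"),
                                   new App("application_2", "OTHER"));
    Set<String> filter = new HashSet<>(Arrays.asList("MAPREDUCE"));
    System.out.println(getApplications(apps, filter).size()); // prints "1"
  }
}
```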
[jira] [Commented] (YARN-523) Container localization failures aren't reported from NM to RM
[ https://issues.apache.org/jira/browse/YARN-523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699606#comment-13699606 ] Hadoop QA commented on YARN-523: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12590746/YARN-523.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1424//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1424//console Container localization failures aren't reported from NM to RM - Key: YARN-523 URL: https://issues.apache.org/jira/browse/YARN-523
[jira] [Commented] (YARN-296) Resource Manager throws InvalidStateTransitonException: Invalid event: APP_ACCEPTED at RUNNING for RMAppImpl
[ https://issues.apache.org/jira/browse/YARN-296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699616#comment-13699616 ] Hadoop QA commented on YARN-296: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12590747/YARN-296-trunk-2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1425//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1425//console This message is automatically generated. 
Resource Manager throws InvalidStateTransitonException: Invalid event: APP_ACCEPTED at RUNNING for RMAppImpl Key: YARN-296 URL: https://issues.apache.org/jira/browse/YARN-296