[jira] [Commented] (YARN-5694) ZKRMStateStore should only start its verification thread when in HA failover is not embedded
[ https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550977#comment-15550977 ] Karthik Kambatla commented on YARN-5694: Yes. We shouldn't check for the type of leader election we use. Should we run the VerifyActiveStatus thread even if HA is not enabled? What if a user configures two RMs to use the same store but forgets to configure HA? > ZKRMStateStore should only start its verification thread when in HA failover > is not embedded > > > Key: YARN-5694 > URL: https://issues.apache.org/jira/browse/YARN-5694 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-5694.001.patch, YARN-5694.branch-2.7.001.patch > > > There are two cases. In branch-2.7, the > {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when > using embedded or Curator failover. In branch-2.8, the > {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is > disabled, which makes no sense. Based on the JIRA that introduced that > change (YARN-4559), I believe the intent was to start it only when embedded > failover is disabled. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
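To make the condition under discussion concrete, here is a minimal sketch of the guard, assuming hypothetical flag names (`haEnabled`, `embeddedElectionUsed`) rather than the actual ZKRMStateStore fields. Per the intent attributed to YARN-4559, the thread runs unless embedded (or Curator) leader election already fences the store:

```java
// Sketch only: the boolean flags are assumptions, not the real ZKRMStateStore API.
public class VerifyThreadPolicy {
    /**
     * Decide whether ZKRMStateStore should start its VerifyActiveStatusThread.
     * The thread is needed whenever leader election does not already fence the
     * store: HA disabled entirely, or HA with non-embedded failover.
     */
    public static boolean shouldStartVerifyThread(boolean haEnabled,
                                                  boolean embeddedElectionUsed) {
        return !haEnabled || !embeddedElectionUsed;
    }

    public static void main(String[] args) {
        // Non-HA: run the thread (guards against two RMs sharing one store).
        System.out.println(shouldStartVerifyThread(false, false)); // true
        // HA with embedded failover: election fences the store already.
        System.out.println(shouldStartVerifyThread(true, true));   // false
    }
}
```

Note this also covers Karthik's misconfiguration concern: with HA off, the thread still runs, so two RMs accidentally pointed at the same store would be detected.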
[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550930#comment-15550930 ] Rohith Sharma K S commented on YARN-5585: - bq. We also need to be *crystal clear* that timeline clients *must* provide the same prefix for all subsequent updates of the same entity. I cannot stress that point enough. Rohith, could you confirm that it is not an issue with Tez to provide the created time for any subsequent updates for Tez entities? This is a very important point for TimelineClient users who want to use prefixId. Even though I am on the minority side of introducing an *optional* prefixId, I convinced myself to go ahead with it because optionality (flexibility) is at least better than a predefined, storage-specific sort order. The underlying issue is in the storage layer; surfacing it to the API as an optional prefix exposes a flaw in the API, since a user can mess up the storage and end up with inconsistent data at retrieval time. I had an offline talk with one of the Tez developers, and he is fine with providing a prefixId. He raised two concerns: first, a multi-JVM setup forces the application programmer to define a new protocol for transferring the prefixId; second, what if a user misses providing the prefixId in a subsequent update? That would leave the storage with the data split across two, or possibly more, entries. bq. I'm also realizing that we might have a bug in how we deal with entity id's. I would have thought that we store the entities in the reverse entity id order, but it appears that the entity id is encoded into the row key as is (EntityRowKey). Am I reading that right? If so, this is a bug to fix. Sorry, I could not quite follow that. Could you explain in a bit more detail? Do you mean reversing only the entityId, i.e. if the entityId is "12345" then "54321", or the row key itself? bq. One other thing to deal with is the query by id. 
There, we need to be able to distinguish the case where the data do not have the prefix to begin with and that where data do. Ideally we would simply use the row key explicitly in the case of data that don't have the prefix to begin with. For those that do have the prefix, we cannot use the row key to fetch the row so we need to do something different. I don't think this was done in the current patch, but this is TBD. I was thinking to use the same REST API for both by using a SingleColumnFilter. One con I see is a table scan across all the entity types, which affects read performance. I will handle the other comments. Also, I will create the patch on the YARN-5355 branch. > [Atsv2] Add a new filter fromId in REST endpoints > - > > Key: YARN-5585 > URL: https://issues.apache.org/jira/browse/YARN-5585 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, > YARN-5585.v0.patch > > > TimelineReader REST APIs provide a lot of filters to retrieve > applications. Along with those, it would be good to add a new filter, fromId, > so that entities can be retrieved after the fromId. > Current Behavior: The default limit is set to 100. If there are 1000 entities, > the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 > entities, i.e. 101 to 200 OR 900 to 801? > Example: If applications app-1, app-2 ... app-10 are stored in the database, > *getApps?limit=5* gives app-1 to app-5. But there is > no way to retrieve the next 5 apps. > So the proposal is to have fromId in the filter, like > *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to > app-10. > Since ATS targets storing a large number of entities, it is a very common > use case to get the next set of entities using fromId rather than querying all > the entities. This is very useful for pagination in the web UI. 
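A minimal client-side sketch of the proposed paging semantics, assuming an exclusive fromId as in the getApps example above. `fetchPage` is a stand-in for the REST call over an ordered id list, not a real TimelineReader API:

```java
import java.util.ArrayList;
import java.util.List;

public class FromIdPagingSketch {
    /**
     * Simulates GET .../apps?limit=N&fromId=X over an ordered id list:
     * returns up to limit ids strictly after fromId (from the start when
     * fromId is null, matching a first-page request with no filter).
     */
    static List<String> fetchPage(List<String> orderedIds, int limit, String fromId) {
        // indexOf returns -1 for an unknown fromId, which falls back to page one.
        int start = (fromId == null) ? 0 : orderedIds.indexOf(fromId) + 1;
        int end = Math.min(start + limit, orderedIds.size());
        return new ArrayList<>(orderedIds.subList(start, end));
    }

    public static void main(String[] args) {
        List<String> apps = new ArrayList<>();
        for (int i = 1; i <= 10; i++) {
            apps.add("app-" + i);
        }
        System.out.println(fetchPage(apps, 5, null));    // app-1 .. app-5
        System.out.println(fetchPage(apps, 5, "app-5")); // app-6 .. app-10
    }
}
```

The caller pages by feeding the last id of one response into the fromId of the next request, which is the pagination pattern the description proposes for the web UI.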
[jira] [Commented] (YARN-4477) FairScheduler: Handle condition which can result in an infinite loop in attemptScheduling.
[ https://issues.apache.org/jira/browse/YARN-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550895#comment-15550895 ] Ryan Williams commented on YARN-4477: - Looking at my notes, I think I was being fooled by copious log spam of "Reservation Exceeds …" messages, which I thought were coming from an infinite loop printing that message, but which in reality were just a symptom of some resource requests that were too large for the RM to satisfy, leading it to print a ton of debug messages. > FairScheduler: Handle condition which can result in an infinite loop in > attemptScheduling. > -- > > Key: YARN-4477 > URL: https://issues.apache.org/jira/browse/YARN-4477 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Tao Jie >Assignee: Tao Jie > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: YARN-4477.001.patch, YARN-4477.002.patch, > YARN-4477.003.patch, YARN-4477.004.patch > > > This problem was introduced by YARN-4270, which added a limitation on reservations. > In FSAppAttempt.reserve(): > {code} > if (!reservationExceedsThreshold(node, type)) { > LOG.info("Making reservation: node=" + node.getNodeName() + > " app_id=" + getApplicationId()); > if (!alreadyReserved) { > getMetrics().reserveResource(getUser(), container.getResource()); > RMContainer rmContainer = > super.reserve(node, priority, null, container); > node.reserveResource(this, priority, rmContainer); > setReservation(node); > } else { > RMContainer rmContainer = node.getReservedContainer(); > super.reserve(node, priority, rmContainer, container); > node.reserveResource(this, priority, rmContainer); > setReservation(node); > } > } > {code} > If the reservation exceeds the threshold, the current node will not set a reservation. 
> But in attemptScheduling in FairScheduler: > {code} > while (node.getReservedContainer() == null) { > boolean assignedContainer = false; > if (!queueMgr.getRootQueue().assignContainer(node).equals( > Resources.none())) { > assignedContainers++; > assignedContainer = true; > > } > > if (!assignedContainer) { break; } > if (!assignMultiple) { break; } > if ((assignedContainers >= maxAssign) && (maxAssign > 0)) { break; } > } > {code} > assignContainer(node) still returns FairScheduler.CONTAINER_RESERVED, which does not > equal Resources.none(). > As a result, if multiple assign is enabled and maxAssign is unlimited, this > while loop will never break. > I suppose that assignContainer(node) should return Resources.none() rather than > CONTAINER_RESERVED when the attempt doesn't take the reservation because of > the limitation.
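The failure mode reduces to a toy loop (names simplified from attemptScheduling; `ITERATION_CAP` is a safety guard for the demo, not part of FairScheduler). When a skipped reservation keeps reporting CONTAINER_RESERVED with multiple assign on and maxAssign unlimited, the loop never breaks; reporting a none-resource, as proposed, terminates it on the first pass:

```java
public class ReservationLoopSketch {
    static final int ITERATION_CAP = 1_000; // demo stand-in for "spins forever"

    /**
     * Models attemptScheduling's while loop for a node whose reservation is
     * skipped by the threshold check. assignReturnsNone selects the proposed
     * fix (report Resources.none()) vs. the buggy behavior (report
     * CONTAINER_RESERVED). Returns the number of loop iterations executed.
     */
    static int scheduleIterations(boolean assignReturnsNone, int maxAssign) {
        int assignedContainers = 0;
        int iterations = 0;
        while (iterations < ITERATION_CAP) {
            iterations++;
            // A skipped reservation assigns nothing either way; only the
            // reported resource differs, and that drives the break checks.
            boolean assignedContainer = !assignReturnsNone;
            if (assignedContainer) {
                assignedContainers++;
            }
            if (!assignedContainer) { break; }
            if (maxAssign > 0 && assignedContainers >= maxAssign) { break; }
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println(scheduleIterations(true, -1));  // 1: breaks immediately
        System.out.println(scheduleIterations(false, -1)); // 1000: runs to the demo cap
    }
}
```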
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: New module structure.png Uploading a screenshot of what the new source organization looks like. All HBase-backend-specific code has been moved into the new module, as have all HBase dependencies. > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: New module structure.png, part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. 
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: pt6.yarn5667.001.patch > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: (was: pt6.yarn5667.001.patch) > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: pt6.yarn5667.001.patch Upload another part to fix package names not following directory structure. > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. 
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: (was: pt6.yarn5667.001.patch) > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: pt6.yarn5667.001.patch > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sandflee updated YARN-5453: --- Attachment: YARN-5453.04.patch > FairScheduler#update may skip update demand resource of child queue/app if > current demand reached maxResource > - > > Key: YARN-5453 > URL: https://issues.apache.org/jira/browse/YARN-5453 > Project: Hadoop YARN > Issue Type: Bug >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5453.01.patch, YARN-5453.02.patch, > YARN-5453.03.patch, YARN-5453.04.patch > > > {code} > demand = Resources.createResource(0); > for (FSQueue childQueue : childQueues) { > childQueue.updateDemand(); > Resource toAdd = childQueue.getDemand(); > demand = Resources.add(demand, toAdd); > demand = Resources.componentwiseMin(demand, maxRes); > if (Resources.equals(demand, maxRes)) { > break; > } > } > {code} > If a single queue's demand resource exceeds maxRes, the other queues' demand > resources will not be updated.
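One way to sketch the fix (types simplified to ints, names hypothetical rather than the actual FSQueue API): keep calling updateDemand() on every child and let reaching maxRes cap only the aggregated sum, instead of breaking out of the loop:

```java
import java.util.Arrays;
import java.util.List;

public class DemandUpdateSketch {
    static class ChildQueue {
        final int demand;
        boolean updated = false;          // tracks whether updateDemand() ran
        ChildQueue(int demand) { this.demand = demand; }
        void updateDemand() { updated = true; }
        int getDemand() { return demand; }
    }

    /** Aggregates demand capped at maxRes without skipping any child. */
    static int updateDemand(List<ChildQueue> children, int maxRes) {
        int demand = 0;
        for (ChildQueue child : children) {
            child.updateDemand();                           // runs for every child
            demand = Math.min(demand + child.getDemand(), maxRes);
            // No early break: hitting maxRes only stops the sum growing,
            // not the per-child demand refresh.
        }
        return demand;
    }

    public static void main(String[] args) {
        List<ChildQueue> children = Arrays.asList(
            new ChildQueue(200), new ChildQueue(50), new ChildQueue(30));
        System.out.println(updateDemand(children, 100));    // 100 (capped)
        System.out.println(children.get(2).updated);        // true: last child still refreshed
    }
}
```

With the original break, the first child's demand of 200 would end the loop and the remaining children would never be refreshed; here they are, while the parent still reports the capped value.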
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: (was: pt5.yarn5667.001.patch) > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: (was: pt2.yarn5667.001.patch) > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: (was: pt3.yarn5667.001.patch) > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: (was: pt4.yarn5667.001.patch) > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: (was: pt1.yarn5667.001.patch) > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5667: - Attachment: pt5.yarn5667.001.patch pt4.yarn5667.001.patch pt3.yarn5667.001.patch pt2.yarn5667.001.patch pt1.yarn5667.001.patch I must have done something wrong when creating the patches. The changes I made to trunk can be applied to YARN-5355 cleanly. Uploaded new patches based on YARN-5355. > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, > pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, > pt5.yarn5667.001.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. 
> {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5139) [Umbrella] Move YARN scheduler towards global scheduler
[ https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550501#comment-15550501 ] Hadoop QA commented on YARN-5139: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 11 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 32s {color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 2 new + 1 unchanged - 2 fixed = 3 total (was 3) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 148 new + 1421 unchanged - 157 fixed = 1569 total (was 1578) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 11 new + 0 unchanged - 0 fixed = 11 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 3 new + 937 unchanged - 1 fixed = 940 total (was 938) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 57s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 20s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | Nullcheck of node at line 1414 of value previously dereferenced in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(PlacementSet, boolean) At CapacityScheduler.java:1414 of value previously dereferenced in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(PlacementSet, boolean) At CapacityScheduler.java:[line 1414] | | | org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run() does not release lock on all exception paths At CapacityScheduler.java:on all exception paths At CapacityScheduler.java:[line 532] | | | Unread field:ContainerAllocation.java:[line 61] | | | Unused field:ContainerAllocation.java | | | Read of unwritten field demandingHostLocalNodes in org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.LocalityPlacementSet.getPreferredNodeIterator(PlacementSet) At
[jira] [Commented] (YARN-4477) FairScheduler: Handle condition which can result in an infinite loop in attemptScheduling.
[ https://issues.apache.org/jira/browse/YARN-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550502#comment-15550502 ] Tony Peng commented on YARN-4477: - I'm also getting this problem with assignMultiple=false. [~kasha] [~rdub] what was your offline discussion? > FairScheduler: Handle condition which can result in an infinite loop in > attemptScheduling. > -- > > Key: YARN-4477 > URL: https://issues.apache.org/jira/browse/YARN-4477 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Tao Jie >Assignee: Tao Jie > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: YARN-4477.001.patch, YARN-4477.002.patch, > YARN-4477.003.patch, YARN-4477.004.patch > > > This problem is introduced by YARN-4270, which adds a limitation on reservations. > In FSAppAttempt.reserve(): > {code} > if (!reservationExceedsThreshold(node, type)) { > LOG.info("Making reservation: node=" + node.getNodeName() + > " app_id=" + getApplicationId()); > if (!alreadyReserved) { > getMetrics().reserveResource(getUser(), container.getResource()); > RMContainer rmContainer = > super.reserve(node, priority, null, container); > node.reserveResource(this, priority, rmContainer); > setReservation(node); > } else { > RMContainer rmContainer = node.getReservedContainer(); > super.reserve(node, priority, rmContainer, container); > node.reserveResource(this, priority, rmContainer); > setReservation(node); > } > } > {code} > If the reservation exceeds the threshold, the current node will not set the reservation. 
> But in attemptScheduling in FairScheduler: > {code} > while (node.getReservedContainer() == null) { > boolean assignedContainer = false; > if (!queueMgr.getRootQueue().assignContainer(node).equals( > Resources.none())) { > assignedContainers++; > assignedContainer = true; > > } > > if (!assignedContainer) { break; } > if (!assignMultiple) { break; } > if ((assignedContainers >= maxAssign) && (maxAssign > 0)) { break; } > } > {code} > assignContainer(node) still returns FairScheduler.CONTAINER_RESERVED, which does not > equal Resources.none(). > As a result, if multiple assign is enabled and maxAssign is unlimited, this > while loop would never break. > I suppose that assignContainer(node) should return Resources.none() rather than > CONTAINER_RESERVED when the attempt doesn't take the reservation because of > the limitation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
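The loop condition the reporter describes can be reduced to a small, self-contained model. The sketch below is purely illustrative (the class, constants, and method names are hypothetical stand-ins, not the real YARN API): it shows why a sentinel return value that is not Resources.none() keeps the assignMultiple loop spinning, and why returning a none-equivalent when the reservation threshold rejects the attempt lets the loop break.

```java
/**
 * Hypothetical, simplified model of the FairScheduler.attemptScheduling
 * loop from YARN-4477. Names and constants are illustrative only.
 */
public class ReservationLoopDemo {
    // Stand-in for Resources.none().
    static final int NONE = 0;
    // Stand-in for FairScheduler.CONTAINER_RESERVED: non-none, zero-sized.
    static final int CONTAINER_RESERVED = -1;

    /**
     * Simulates assignContainer(node) when the reservation threshold is
     * exceeded. Before the fix it keeps returning CONTAINER_RESERVED;
     * with the proposed fix it returns NONE so the caller's loop breaks.
     */
    static int assignContainer(boolean fixed) {
        return fixed ? NONE : CONTAINER_RESERVED;
    }

    /**
     * Returns how many iterations the scheduling loop runs before breaking,
     * capped at maxIterations. Models assignMultiple=true with unlimited
     * maxAssign, where only a none() result can break the loop.
     */
    static int loopIterations(boolean fixed, int maxIterations) {
        int iterations = 0;
        while (iterations < maxIterations) {
            iterations++;
            int assigned = assignContainer(fixed);
            if (assigned == NONE) {
                break; // mirrors: if (!assignedContainer) { break; }
            }
        }
        return iterations;
    }
}
```

With the pre-fix behavior the loop only stops at the artificial safety cap; with the proposed fix it exits on the first iteration, which is the change the reporter suggests.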
[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550498#comment-15550498 ] Haibo Chen commented on YARN-5667: -- My bad. I was working on the trunk branch. I will update the patches for YARN-5355 specifically. > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550484#comment-15550484 ] Sangjin Lee commented on YARN-5667: --- [~haibochen], thanks for your patches! I just applied your patches in order, but am unable to get a clean build. Perhaps some files are missing in the patches? Could you try on a clean YARN-5355 branch and see if the patches are correct? > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550481#comment-15550481 ] Hadoop QA commented on YARN-5677: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 16s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 3s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831836/YARN-5677.003.patch | | JIRA Issue | YARN-5677 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2d8435242594 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e68c7b9 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13300/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13300/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > RM can be in active-active state for an extended period > --- > > Key: YARN-5677 > URL: https://issues.apache.org/jira/browse/YARN-5677 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: YARN-5677.001.patch,
[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550424#comment-15550424 ] Haibo Chen commented on YARN-5667: -- Thanks for your quick reviews [~vrushalic]. I did not change the hbase version, so I believe I must have been testing against hbase 1.1.3. In terms of testing done, I enabled the coprocessor in my hbase cluster and ATS v2 in my yarn cluster, ran a few mapreduce example jobs, and then verified the reader through a few flow and application-level REST requests. Everything seems to have worked for me. I could see the metrics for a flow run as well (Is this what you mean by " Did you recollect the counters aggregated to the flow level"?) I will update with a layout of what the file structure is like before and after my change. > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. 
> {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5702) Refactor TestPBImplRecords so that we can reuse for testing protocol records in other YARN modules
[ https://issues.apache.org/jira/browse/YARN-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-5702: - Issue Type: Sub-task (was: Improvement) Parent: YARN-2915 > Refactor TestPBImplRecords so that we can reuse for testing protocol records > in other YARN modules > -- > > Key: YARN-5702 > URL: https://issues.apache.org/jira/browse/YARN-5702 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Subru Krishnan >Assignee: Subru Krishnan > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: YARN-5702-v1.patch, YARN-5702-v2.patch > > > The {{TestPBImplRecords}} has generic helper methods to validate YARN api > records. This JIRA proposes to refactor the generic helper methods into a > base class that can then be reused by other YARN modules for testing internal > API protocol records like in yarn-server-common for Federation (YARN-2915). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5139) [Umbrella] Move YARN scheduler towards global scheduler
[ https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-5139: - Attachment: YARN-5139.000.patch Attached ver.000 patch and kicked Jenkins > [Umbrella] Move YARN scheduler towards global scheduler > --- > > Key: YARN-5139 > URL: https://issues.apache.org/jira/browse/YARN-5139 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: Explanantions of Global Scheduling (YARN-5139) > Implementation.pdf, YARN-5139-Concurrent-scheduling-performance-report.pdf, > YARN-5139-Global-Schedulingd-esign-and-implementation-notes-v2.pdf, > YARN-5139-Global-Schedulingd-esign-and-implementation-notes.pdf, > YARN-5139.000.patch, wip-1.YARN-5139.patch, wip-2.YARN-5139.patch, > wip-3.YARN-5139.patch, wip-4.YARN-5139.patch, wip-5.YARN-5139.patch > > > The existing YARN scheduler is based on node heartbeat. This can lead to > sub-optimal decisions because the scheduler can only look at one node at a time > when scheduling resources. > Pseudo code of existing scheduling logic looks like: > {code} > for node in allNodes: >Go to parentQueue > Go to leafQueue > for application in leafQueue.applications: >for resource-request in application.resource-requests > try to schedule on node > {code} > Considering future complex resource placement requirements, such as node > constraints (give me "a && b || c") or anti-affinity (do not allocate HBase > regionservers and Storm workers on the same host), we may need to consider > moving YARN scheduler towards global scheduling. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
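The pseudo code quoted in the YARN-5139 description loops over nodes, so each decision sees only one node. A global scheduler inverts that: for a given resource request it compares all candidate nodes and picks the best fit. The toy sketch below illustrates that inversion; the class and method names are hypothetical, not part of the actual YARN-5139 patches.

```java
import java.util.Map;

/**
 * Toy contrast with the node-heartbeat model: instead of asking
 * "what can this one node run?", a global pass asks "which node best
 * serves this one request?". All names are illustrative only.
 */
public class GlobalSchedulingSketch {

    /**
     * Picks the node with the most free capacity that satisfies the
     * request, considering every candidate node at once. Returns null
     * when no node fits (a real scheduler would queue or reserve).
     */
    static String globalAssign(Map<String, Integer> freeByNode, int demand) {
        String best = null;
        for (Map.Entry<String, Integer> e : freeByNode.entrySet()) {
            boolean fits = e.getValue() >= demand;
            if (fits && (best == null || e.getValue() > freeByNode.get(best))) {
                best = e.getKey();
            }
        }
        return best;
    }
}
```

A heartbeat-driven scheduler would commit the request to whichever node happened to report in first; a global pass can honor constraints such as anti-affinity ("do not co-locate HBase regionservers and Storm workers") because it sees all candidates before deciding.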
[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
[ https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550275#comment-15550275 ] Hadoop QA commented on YARN-3139: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} 
| {color:green} javadoc {color} | {color:green} 0m 23s {color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 241 unchanged - 53 fixed = 248 total (was 294) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 28s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed with JDK v1.8.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 20s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 32s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831820/YARN-3139.branch-2.007.patch | | JIRA Issue | YARN-3139 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | |
[jira] [Updated] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-5677: --- Attachment: YARN-5677.003.patch Here's a patch that adds tests. > RM can be in active-active state for an extended period > --- > > Key: YARN-5677 > URL: https://issues.apache.org/jira/browse/YARN-5677 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: YARN-5677.001.patch, YARN-5677.002.patch, > YARN-5677.003.patch > > > In trunk, there is no maximum number of retries that I see. It appears the > connection will be retried forever, with the active never figuring out it's > no longer active. In my testing, the active-active state lasted almost 2 > hours with no sign of stopping before I killed it. The solution appears to > be to cap the number of retries or amount of time spent retrying. > This issue is significant because of the asynchronous nature of job > submission. If the active doesn't know it's not active, it will buffer up > job submissions until it finally realizes it has become the standby. Then it > will fail all the job submissions in bulk. In high-volume workflows, that > behavior can create huge mass job failures. > This issue is also important because the node managers will not fail over to > the new active until the old active realizes it's the standby. Workloads > submitted after the old active loses contact with ZK will therefore fail to > be executed regardless of which RM the clients contact. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
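The remedy described above — capping the number of retries or the amount of time spent retrying — can be sketched as a simple guard. This is illustrative only; the method name, parameters, and limits below are hypothetical and not taken from the patch:

```java
// Illustrative sketch of a bounded-retry guard for the ZK reconnect loop.
// Names and limits are hypothetical, not the patch's API.
public class RetryCapSketch {
    static boolean shouldKeepRetrying(int attempts, long elapsedMs,
                                      int maxAttempts, long maxElapsedMs) {
        // Give up once either bound is hit, so the old active can transition
        // to standby instead of buffering job submissions forever.
        return attempts < maxAttempts && elapsedMs < maxElapsedMs;
    }

    public static void main(String[] args) {
        assert shouldKeepRetrying(3, 10_000L, 10, 60_000L);   // under both caps
        assert !shouldKeepRetrying(10, 10_000L, 10, 60_000L); // attempt cap hit
        assert !shouldKeepRetrying(3, 60_000L, 10, 60_000L);  // time cap hit
        System.out.println("ok");
    }
}
```

Either bound alone would end the active-active window; checking both lets operators cap whichever resource they care about.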
[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor
[ https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550194#comment-15550194 ] Daniel Templeton commented on YARN-5388: The new javadoc warnings are because the class is now deprecated. > MAPREDUCE-6719 requires changes to DockerContainerExecutor > -- > > Key: YARN-5388 > URL: https://issues.apache.org/jira/browse/YARN-5388 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Fix For: 2.9.0 > > Attachments: YARN-5388.001.patch, YARN-5388.002.patch, > YARN-5388.branch-2.001.patch, YARN-5388.branch-2.002.patch > > > Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} > method, it must also have the wildcard processing logic from > YARN-4958/YARN-5373 added to it. Without it, the use of -libjars will fail > unless wildcarding is disabled.
[jira] [Commented] (YARN-5702) Refactor TestPBImplRecords so that we can reuse for testing protocol records in other YARN modules
[ https://issues.apache.org/jira/browse/YARN-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550237#comment-15550237 ] Subru Krishnan commented on YARN-5702: -- Thanks Chris! > Refactor TestPBImplRecords so that we can reuse for testing protocol records > in other YARN modules > -- > > Key: YARN-5702 > URL: https://issues.apache.org/jira/browse/YARN-5702 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Subru Krishnan >Assignee: Subru Krishnan > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: YARN-5702-v1.patch, YARN-5702-v2.patch > > > The {{TestPBImplRecords}} has generic helper methods to validate YARN api > records. This JIRA proposes to refactor the generic helper methods into a > base class that can then be reused by other YARN modules for testing internal > API protocol records like in yarn-server-common for Federation (YARN-2915).
[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550218#comment-15550218 ] Sangjin Lee commented on YARN-5585: --- Thanks [~rohithsharma] for contributing the initial patch for this! I have some high level comments on this and a couple of specific ones. (1) I think the patch does this, but it would be great to leave this prefix generic. For Tez, I'm assuming the (inverted) created time would be it. For others, it might be something different (something that can be provided easily). I think it is useful to have that flexibility. More importantly, it should be *optional*. Any framework (and the YARN-generic ones) should be able to skip the prefix and expect things to be sorted by the entity id order. I think the patch reflects both, but wanted to clarify. (2) We also need to be *crystal clear* that timeline clients *must* provide the same prefix for all subsequent updates of the same entity. I cannot stress that point enough. Rohith, could you confirm that it is not an issue with Tez to provide the created time for any subsequent updates for Tez entities? (3) I'm also realizing that we might have a bug in how we deal with entity id's. I would have thought that we store the entities in the *reverse* entity id order, but it appears that the entity id is encoded into the row key as is ({{EntityRowKey}}). Am I reading that right? If so, this is a bug to fix. (4) I agree with Varun that users should provide already inverted values. Users can call {{LongConverter.invertLong(createdTime)}} to give us inverted values. We also need to make this explicit in the javadoc. (5) I also agree with Varun that we need not store the prefix (again) as a column. It would be part of the row key, and as such we should have no problem reading it, right? (6) One other thing to deal with is the query by id. 
There, we need to be able to distinguish the case where the data do not have the prefix to begin with and the case where they do. Ideally we would simply use the row key explicitly in the case of data that don't have the prefix to begin with. For those that do have the prefix, we cannot use the row key to fetch the row so we need to do something different. I don't think this was done in the current patch, but this is TBD. (7) Since this is a subtask for YARN-5355, can we base the patch on that feature branch? Thanks! > [Atsv2] Add a new filter fromId in REST endpoints > - > > Key: YARN-5585 > URL: https://issues.apache.org/jira/browse/YARN-5585 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, > YARN-5585.v0.patch > > > TimelineReader REST APIs provide a lot of filters to retrieve the > applications. Along with those, it would be good to add a new filter i.e fromId > so that entities can be retrieved after the fromId. > Current Behavior : Default limit is set to 100. If there are 1000 entities > then REST call gives first/last 100 entities. How to retrieve next set of 100 > entities i.e 101 to 200 OR 900 to 801? > Example : If applications are stored in a database, app-1 app-2 ... app-10. > *getApps?limit=5* gives app-1 to app-5. But to retrieve next 5 apps, there is > no way to achieve this. > So proposal is to have fromId in the filter like > *getApps?limit=5&fromId=app-5* which gives list of apps from app-6 to > app-10. > Since ATS is targeting large number of entities storage, it is very common > use case to get next set of entities using fromId rather than querying all > the entities. This is very useful for pagination in web UI. 
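Point (4) in the comment above — callers passing already inverted values — rests on the standard trick of subtracting a timestamp from Long.MAX_VALUE so that larger (newer) values sort first in an ascending, byte-ordered store like HBase. A minimal sketch of the idea, not the actual {{LongConverter}} source:

```java
// Sketch of long inversion for row-key ordering: after inversion, newer
// createdTime values compare as smaller, so they come first when the store
// scans row keys in ascending order. Assumes non-negative inputs.
public class InvertLongSketch {
    static long invertLong(long key) {
        return Long.MAX_VALUE - key;
    }

    public static void main(String[] args) {
        long earlier = 1000L, later = 2000L;
        // The later timestamp inverts to the smaller value, so the newest
        // entity sorts first in ascending row-key order.
        assert invertLong(later) < invertLong(earlier);
        // Inversion is its own inverse, so the original value is recoverable.
        assert invertLong(invertLong(earlier)) == earlier;
        System.out.println("ok");
    }
}
```

This also makes the "same prefix on every update" requirement concrete: if a client inverts a different createdTime on a later update, the entity lands under a different row key.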
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550216#comment-15550216 ] Eric Payne commented on YARN-2009: -- Thanks, [~sunilg], for the new patch. Preemption for the purposes of preventing priority inversion seems to work now without unneeded preemption. However, in this new patch, user-limit-percent preemption doesn't seem to be working. If: # {{user1}} starts {{app1}} at {{priority1}} on {{Queue1}} and consumes the entire queue # {{user2}} starts {{app2}} at {{priority1}} on {{Queue1}} preemption does not happen. I will continue to investigate, but I thought I would let you know. > Priority support for preemption in ProportionalCapacityPreemptionPolicy > --- > > Key: YARN-2009 > URL: https://issues.apache.org/jira/browse/YARN-2009 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Devaraj K >Assignee: Sunil G > Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, > YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch > > > While preempting containers based on the queue ideal assignment, we may need > to consider preempting the low priority application containers first.
[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor
[ https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550158#comment-15550158 ] Hadoop QA commented on YARN-5388: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 51s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s {color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.8.0_101 with JDK v1.8.0_101 generated 5 new + 17 unchanged - 0 fixed = 22 total (was 17) {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_111 with JDK v1.7.0_111 generated 5 new + 19 unchanged - 0 fixed = 24 total (was 19) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 48s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with JDK v1.8.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 44s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 4s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Patch URL |
[jira] [Created] (YARN-5712) WebAppProxyServlet is not passing the Authorization Header
Vijay Srinivasaraghavan created YARN-5712: - Summary: WebAppProxyServlet is not passing the Authorization Header Key: YARN-5712 URL: https://issues.apache.org/jira/browse/YARN-5712 Project: Hadoop YARN Issue Type: Bug Components: webapp, yarn Reporter: Vijay Srinivasaraghavan Scenario: 1) Deployed custom web application as Yarn application 2) Custom web application URL is exposed as the tracking URL 3) When user clicks the application link (Tracking URL) from Yarn RM UI, Yarn web proxy forwards the request to custom web application URL 4) Custom web app is handling basic AUTH and it expects the Authorization header to allow the user to move forward. If the Authorization header is missing, then it will prompt the user to enter user ID and password (standard HTTP basic auth) 5) Yarn web proxy is not forwarding the Authorization header back to the custom web app (and hence the custom web app always prompts the user for credentials) The Yarn web proxy currently supports only a small set of pass-through headers while forwarding the request to the tracking URL of the container application (runtime web application deployed through Yarn) https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java#L80 The runtime web application expects the "Authorization" header to perform basic HTTP authentication but the Yarn proxy is not forwarding the header. I understand the security reason behind why limited headers are exposed, but in situations where additional headers need to be propagated, there should be an option to include them.
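The option the reporter asks for could take the shape of a configurable allow-list layered over the servlet's fixed defaults. The sketch below is hypothetical: the config value format and the default header set are illustrative, and the real pass-through list lives in the WebAppProxyServlet source linked above:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch (not WebAppProxyServlet code): extend a fixed
// pass-through header allow-list with extra headers from configuration,
// so e.g. "Authorization" can be forwarded to the tracking URL.
public class PassThroughHeaders {
    // Illustrative defaults; the servlet's actual list may differ.
    static Set<String> buildAllowList(String extraHeadersConf) {
        Set<String> allowed = new HashSet<>(Arrays.asList(
            "User-Agent", "Accept", "Accept-Encoding", "Accept-Language"));
        if (extraHeadersConf != null && !extraHeadersConf.isEmpty()) {
            // Comma-separated list, e.g. "Authorization,X-Custom-Header".
            for (String header : extraHeadersConf.split(",")) {
                allowed.add(header.trim());
            }
        }
        return allowed;
    }

    public static void main(String[] args) {
        Set<String> allowed = buildAllowList("Authorization");
        assert allowed.contains("Authorization"); // opted in via config
        assert allowed.contains("User-Agent");    // default still present
        assert !buildAllowList(null).contains("Authorization");
        System.out.println("ok");
    }
}
```

Keeping the default list closed and requiring an explicit opt-in preserves the security rationale the reporter acknowledges.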
[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor
[ https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550150#comment-15550150 ] Allen Wittenauer commented on YARN-5704: bq. I tested various enable/disable combinations manually on Centos 7.2. Why weren't tests added to test-container-executor? > Provide config knobs to control enabling/disabling new/work in progress > features in container-executor > -- > > Key: YARN-5704 > URL: https://issues.apache.org/jira/browse/YARN-5704 > Project: Hadoop YARN > Issue Type: Task > Components: yarn >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana > Attachments: YARN-5704.001.patch > > > Provide a mechanism to enable/disable Docker and TC (Traffic Control) > functionality at the container-executor level.
[jira] [Commented] (YARN-5706) Fail to launch SLSRunner due to NPE
[ https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550145#comment-15550145 ] Allen Wittenauer commented on YARN-5706: It should be ${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_DIR}/sls/html . > Fail to launch SLSRunner due to NPE > --- > > Key: YARN-5706 > URL: https://issues.apache.org/jira/browse/YARN-5706 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: YARN-5706.01.patch > > > {code} > java.lang.NullPointerException > at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153) > at > org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76) > {code} > CLASSPATH for html resource is not configured properly. > {code} > DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH > DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist) > {code} > This issue can be reproduced by following the documentation > instructions. > http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html > {code} > $ cd $HADOOP_ROOT/share/hadoop/tools/sls > $ bin/slsrun.sh > --input-rumen |--input-sls=> --output-dir= [--nodes=] > [--track-jobs= ] [--print-simulation] > {code}
[jira] [Comment Edited] (YARN-5706) Fail to launch SLSRunner due to NPE
[ https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550145#comment-15550145 ] Allen Wittenauer edited comment on YARN-5706 at 10/5/16 10:27 PM: -- It should be {code} ${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_DIR}/sls/html {code} was (Author: aw): It should be ${HADOOP_TOOLS_HOME}/${HADOOP_TOOLS_DIR}/sls/html . > Fail to launch SLSRunner due to NPE > --- > > Key: YARN-5706 > URL: https://issues.apache.org/jira/browse/YARN-5706 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: YARN-5706.01.patch > > > {code} > java.lang.NullPointerException > at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153) > at > org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76) > {code} > CLASSPATH for html resource is not configured properly. > {code} > DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH > DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist) > {code} > This issue can be reproduced by following the documentation > instructions. > http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html > {code} > $ cd $HADOOP_ROOT/share/hadoop/tools/sls > $ bin/slsrun.sh > --input-rumen |--input-sls=> --output-dir= [--nodes=] > [--track-jobs= ] [--print-simulation] > {code}
[jira] [Commented] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550124#comment-15550124 ] Hadoop QA commented on YARN-5677: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 44s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 53m 5s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831812/YARN-5677.002.patch | | JIRA Issue | YARN-5677 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 76069434e8e6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 00160f7 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13297/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13297/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > RM can be in active-active state for an extended period > --- > > Key: YARN-5677 > URL: https://issues.apache.org/jira/browse/YARN-5677 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel
[jira] [Commented] (YARN-5711) AM cannot reconnect to RM after failover when using RequestHedgingRMFailoverProxyProvider
[ https://issues.apache.org/jira/browse/YARN-5711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550106#comment-15550106 ] Subru Krishnan commented on YARN-5711: -- Looking at the code, one fix I can think of is to refactor the *invoke* method into an *identify RM_IDs* step. Then the actual connection to the selected RM_ID (the current primary) can be made directly on the main thread as is done presently. [~jianhe], thoughts/suggestions? > AM cannot reconnect to RM after failover when using > RequestHedgingRMFailoverProxyProvider > - > > Key: YARN-5711 > URL: https://issues.apache.org/jira/browse/YARN-5711 > Project: Hadoop YARN > Issue Type: Bug > Components: applications, resourcemanager >Affects Versions: 2.9.0, 3.0.0-alpha1 >Reporter: Subru Krishnan >Priority: Critical > > When the RM fails over, it does _not_ auto re-register running apps and so they > need to re-register when reconnecting to new primary. This is done by > catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and > re-registering. But *RequestHedgingRMFailoverProxyProvider* does _not_ > propagate {{YarnException}} as the actual invocation is done asynchronously > using separate threads, so AMs cannot reconnect to RM after failover. > This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate > any {{YarnException}} that it encounters.
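The propagation problem can be seen with a plain ExecutorService: an exception thrown on a worker thread surfaces to the caller as an ExecutionException, and unless its cause is unwrapped and rethrown, the caller never sees the original YarnException. A minimal sketch of the idea, not the provider's actual code:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run an invocation on a separate thread, but rethrow the real
// cause of any ExecutionException so that exceptions like
// ApplicationMasterNotRegisteredException reach the AM and trigger
// re-registration instead of being swallowed.
public class UnwrapSketch {
    static Object invokeAndPropagate(Callable<Object> call) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<Object> future = pool.submit(call);
            try {
                return future.get();
            } catch (ExecutionException e) {
                Throwable cause = e.getCause();
                if (cause instanceof Exception) {
                    throw (Exception) cause; // propagate, don't swallow
                }
                throw e;
            }
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        boolean caught = false;
        try {
            // Stand-in for a YarnException thrown by the remote invocation.
            invokeAndPropagate(() -> {
                throw new IllegalStateException("AM not registered");
            });
        } catch (IllegalStateException expected) {
            caught = true;
        }
        assert caught;
        System.out.println("ok");
    }
}
```

Whether the fix unwraps as above or restructures the hedging (as the comment suggests, identifying the RM_ID first and invoking on the main thread), the contract is the same: the caller must observe the original exception type.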
[jira] [Updated] (YARN-5711) AM cannot reconnect to RM after failover when using RequestHedgingRMFailoverProxyProvider
[ https://issues.apache.org/jira/browse/YARN-5711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-5711: - Description: When the RM fails over, it does _not_ auto re-register running apps and so they need to re-register when reconnecting to new primary. This is done by catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and re-registering. But *RequestHedgingRMFailoverProxyProvider* does _not_ propagate {{YarnException}} as the actual invocation is done asynchronously using separate threads, so AMs cannot reconnect to RM after failover. This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate any {{YarnException}} that it encounters. was: When the RM fails over, it does _not_ auto re-register running apps and so they need to re-register when reconnecting to new primary. This is done by catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and re-registering. But *RequestHedgingRMFailoverProxyProvider* does _not_ propagate {{YarnException}} as the actual invocation is done asynchronously using separate threads so AMs cannot reconnect to RM after failover. This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate any {{YarnException}} that it encounters. > AM cannot reconnect to RM after failover when using > RequestHedgingRMFailoverProxyProvider > - > > Key: YARN-5711 > URL: https://issues.apache.org/jira/browse/YARN-5711 > Project: Hadoop YARN > Issue Type: Bug > Components: applications, resourcemanager >Affects Versions: 2.9.0, 3.0.0-alpha1 >Reporter: Subru Krishnan >Priority: Critical > > When the RM fails over, it does _not_ auto re-register running apps and so they > need to re-register when reconnecting to new primary. This is done by > catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and > re-registering. 
But *RequestHedgingRMFailoverProxyProvider* does _not_ > propagate {{YarnException}} as the actual invocation is done asynchronously > using separate threads, so AMs cannot reconnect to RM after failover. > This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate > any {{YarnException}} that it encounters.
[jira] [Updated] (YARN-5711) AM cannot reconnect to RM after failover when using RequestHedgingRMFailoverProxyProvider
[ https://issues.apache.org/jira/browse/YARN-5711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-5711: - Description: When the RM fails over, it does _not_ auto re-register running apps and so they need to re-register when reconnecting to new primary. This is done by catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and re-registering. But *RequestHedgingRMFailoverProxyProvider* does _not_ propagate {{YarnException}} as the actual invocation is done asynchronously using separate threads so AMs cannot reconnect to RM after failover. This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate any {{YarnException}} that it encounters. was: When the RM fails over, it does _not_ auto re-register running apps and so they need to re-register when reconnecting to new primary. This is done by catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and re-registering. But *RequestHedgingRMFailoverProxyProvider* does _not_ propagate {{YarnException}} as the actual invocation is done asynchronously using separate threads. This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate any {{YarnException}} that it encounters. > AM cannot reconnect to RM after failover when using > RequestHedgingRMFailoverProxyProvider > - > > Key: YARN-5711 > URL: https://issues.apache.org/jira/browse/YARN-5711 > Project: Hadoop YARN > Issue Type: Bug > Components: applications, resourcemanager >Affects Versions: 2.9.0, 3.0.0-alpha1 >Reporter: Subru Krishnan >Priority: Critical > > When the RM fails over, it does _not_ auto re-register running apps and so they > need to re-register when reconnecting to new primary. This is done by > catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and > re-registering. 
But *RequestHedgingRMFailoverProxyProvider* does _not_ > propagate {{YarnException}} as the actual invocation is done asynchronously > using separate threads, so AMs cannot reconnect to RM after failover. > This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate > any {{YarnException}} that it encounters. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5711) AM cannot reconnect to RM after failover when using RequestHedgingRMFailoverProxyProvider
Subru Krishnan created YARN-5711: Summary: AM cannot reconnect to RM after failover when using RequestHedgingRMFailoverProxyProvider Key: YARN-5711 URL: https://issues.apache.org/jira/browse/YARN-5711 Project: Hadoop YARN Issue Type: Bug Components: applications, resourcemanager Affects Versions: 3.0.0-alpha1, 2.9.0 Reporter: Subru Krishnan Priority: Critical When RM fails over, it does _not_ auto re-register running apps and so they need to re-register when reconnecting to the new primary. This is done by catching {{ApplicationMasterNotRegisteredException}} in *allocate* calls and re-registering. But *RequestHedgingRMFailoverProxyProvider* does _not_ propagate {{YarnException}} as the actual invocation is done asynchronously using separate threads. This JIRA proposes that the *RequestHedgingRMFailoverProxyProvider* propagate any {{YarnException}} that it encounters. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
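The YARN-5711 description above hinges on how exceptions travel out of an asynchronous invocation. As a hedged sketch of the general mechanism (illustrative only, not the actual *RequestHedgingRMFailoverProxyProvider* code; {{HedgedInvoker}} is a hypothetical name), a callee's exception surfaces from a {{Future}} wrapped in an {{ExecutionException}}, and the provider must unwrap and rethrow the cause for the caller to ever see the original exception type:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: run the call on a separate thread, then rethrow the
// callee's original exception instead of losing it inside ExecutionException.
public class HedgedInvoker {
    public static <T> T invoke(Callable<T> call) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = pool.submit(call);
            try {
                return future.get();
            } catch (ExecutionException e) {
                // The exception thrown on the worker thread (e.g. a
                // YarnException) is available as the cause.
                Throwable cause = e.getCause();
                if (cause instanceof Exception) {
                    throw (Exception) cause; // propagate the real exception
                }
                throw e;
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

With unwrapping of this shape in place, an AM's *allocate* caller can catch the re-registration exception and re-register, which is the behavior the JIRA asks for.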
[jira] [Commented] (YARN-5694) ZKRMStateStore should only start its verification thread when in HA failover is not embedded
[ https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15550046#comment-15550046 ] Daniel Templeton commented on YARN-5694: Based on offline conversation, it sounds like the enabled/embedded check should be removed, as it rules out the use case of having a different ZK instance for the leader election than for the state store. Did I understand that correctly, [~kasha]? > ZKRMStateStore should only start its verification thread when in HA failover > is not embedded > > > Key: YARN-5694 > URL: https://issues.apache.org/jira/browse/YARN-5694 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-5694.001.patch, YARN-5694.branch-2.7.001.patch > > > There are two cases. In branch-2.7, the > {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when > using embedded or Curator failover. In branch-2.8, the > {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is > disabled, which makes no sense. Based on the JIRA that introduced that > change (YARN-4559), I believe the intent was to start it only when embedded > failover is disabled. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
[ https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-3139: - Attachment: YARN-3139.branch-2.007.patch Attached patch for branch-2 > Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler > -- > > Key: YARN-3139 > URL: https://issues.apache.org/jira/browse/YARN-3139 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager, scheduler >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch, > YARN-3139.3.patch, YARN-3139.4.patch, YARN-3139.5.patch, YARN-3139.6.patch, > YARN-3139.7.patch, YARN-3139.branch-2.007.patch > > > Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as > mentioned in YARN-3091, a possible solution is to use read/write locks. Other > fine-grained locks for specific purposes / bugs should be addressed in > separate tickets. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
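The read/write-lock approach mentioned in the YARN-3139 description can be sketched as follows. This is a minimal illustration of the pattern only, not the actual scheduler code; {{RWCounter}} and its fields are hypothetical names. Many readers may hold the read lock concurrently, while writers take the lock exclusively:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative read/write-lock pattern: concurrent reads, exclusive writes.
public class RWCounter {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long value;

    public long get() {
        lock.readLock().lock(); // shared: does not block other readers
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void increment() {
        lock.writeLock().lock(); // exclusive: blocks readers and writers
        try {
            value++;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

For scheduler-style workloads where reads (e.g. queue metrics lookups) far outnumber writes (state mutations), this reduces contention compared to a single monitor lock.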
[jira] [Updated] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor
[ https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-5388: --- Attachment: YARN-5388.branch-2.002.patch Here's a branch-2 patch with the {{@Deprecated}} tag and an explanation. > MAPREDUCE-6719 requires changes to DockerContainerExecutor > -- > > Key: YARN-5388 > URL: https://issues.apache.org/jira/browse/YARN-5388 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Fix For: 2.9.0 > > Attachments: YARN-5388.001.patch, YARN-5388.002.patch, > YARN-5388.branch-2.001.patch, YARN-5388.branch-2.002.patch > > > Because the {{DockerContainerExecuter}} overrides the {{writeLaunchEnv()}} > method, it must also have the wildcard processing logic from > YARN-4958/YARN-5373 added to it. Without it, the use of -libjars will fail > unless wildcarding is disabled. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5702) Refactor TestPBImplRecords so that we can reuse for testing protocol records in other YARN modules
[ https://issues.apache.org/jira/browse/YARN-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated YARN-5702: Fix Version/s: 2.9.0 > Refactor TestPBImplRecords so that we can reuse for testing protocol records > in other YARN modules > -- > > Key: YARN-5702 > URL: https://issues.apache.org/jira/browse/YARN-5702 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Subru Krishnan >Assignee: Subru Krishnan > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: YARN-5702-v1.patch, YARN-5702-v2.patch > > > The {{TestPBImplRecords}} has generic helper methods to validate YARN api > records. This JIRA proposes to refactor the generic helper methods into a > base class that can then be reused by other YARN modules for testing internal > API protocol records like in yarn-server-common for Federation (YARN-2915). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-5677: --- Attachment: YARN-5677.002.patch This patch addresses the race. I was not planning to tackle the {{ZKRMStateStore.VerifyActiveStatusThread}} issues in this patch. Let's work out the right thing to do on YARN-5694 and resolve it there. > RM can be in active-active state for an extended period > --- > > Key: YARN-5677 > URL: https://issues.apache.org/jira/browse/YARN-5677 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: YARN-5677.001.patch, YARN-5677.002.patch > > > In trunk, there is no maximum number of retries that I see. It appears the > connection will be retried forever, with the active never figuring out it's > no longer active. In my testing, the active-active state lasted almost 2 > hours with no sign of stopping before I killed it. The solution appears to > be to cap the number of retries or amount of time spent retrying. > This issue is significant because of the asynchronous nature of job > submission. If the active doesn't know it's not active, it will buffer up > job submissions until it finally realizes it has become the standby. Then it > will fail all the job submissions in bulk. In high-volume workflows, that > behavior can create huge mass job failures. > This issue is also important because the node managers will not fail over to > the new active until the old active realizes it's the standby. Workloads > submitted after the old active loses contact with ZK will therefore fail to > be executed regardless of which RM the clients contact. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
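The fix direction proposed in YARN-5677 — capping the number of retries or the time spent retrying — amounts to a bounded retry loop; a minimal sketch, with {{CappedRetry}} and {{maxRetries}} as illustrative names rather than actual YARN configuration:

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch: bound the reconnect attempts so an RM that has lost
// ZooKeeper eventually gives up instead of retrying forever.
public class CappedRetry {
    public static boolean retryUntil(BooleanSupplier attempt, int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            if (attempt.getAsBoolean()) {
                return true; // reconnected within the cap
            }
        }
        return false; // cap exhausted: caller should transition to standby
    }
}
```

The point is the {{false}} branch: once the cap is exhausted, the old active can conclude it is no longer active and stop buffering job submissions, avoiding the extended active-active window described above.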
[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers
[ https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549956#comment-15549956 ] Hadoop QA commented on YARN-5047: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s {color} | {color:red} YARN-5047 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831764/YARN-5047.011.patch | | JIRA Issue | YARN-5047 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13296/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Refactor nodeUpdate across schedulers > - > > Key: YARN-5047 > URL: https://issues.apache.org/jira/browse/YARN-5047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, scheduler >Affects Versions: 3.0.0-alpha1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: YARN-5047.001.patch, YARN-5047.002.patch, > YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, > YARN-5047.006.patch, YARN-5047.007.patch, YARN-5047.008.patch, > YARN-5047.009.patch, YARN-5047.010.patch, YARN-5047.011.patch > > > FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of > commonality in their code. See about refactoring the common parts into > AbstractYARNScheduler. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-2995) Enhance UI to show cluster resource utilization of various container types
[ https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549895#comment-15549895 ] Arun Suresh commented on YARN-2995: --- Thanks for the patch [~kkaranasos] Couple of comments: * Can we add javadoc for all the methods in {{OpportunisticContainersStatus}}. * {{opport_cores_used}} can be int32 instead of int64. * rename {{RMNodeStatusEvent::getContainerQueueInfo}} to {{getOpportunisticContainersInfo()}} > Enhance UI to show cluster resource utilization of various container types > -- > > Key: YARN-2995 > URL: https://issues.apache.org/jira/browse/YARN-2995 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Sriram Rao >Assignee: Konstantinos Karanasos > Attachments: YARN-2995.001.patch > > > This JIRA proposes to extend the Resource manager UI to show how cluster > resources are being used to run *guaranteed start* and *queueable* > containers. For example, a graph that shows over time, the fraction of > running containers that are *guaranteed start* and the fraction of running > containers that are *queueable*. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods
[ https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549888#comment-15549888 ] Daniel Templeton commented on YARN-5659: LGTM. > getPathFromYarnURL should use standard methods > -- > > Key: YARN-5659 > URL: https://issues.apache.org/jira/browse/YARN-5659 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: YARN-5659.01.patch, YARN-5659.02.patch, > YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, > YARN-5659.05.patch, YARN-5659.05.patch, YARN-5659.patch > > > getPathFromYarnURL does some string shenanigans where standard ctors should > suffice. > There are also bugs in it e.g. passing an empty scheme to the URI ctor is > invalid, null should be used. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
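The URI-constructor point in YARN-5659 is easy to demonstrate: {{java.net.URI}} accepts a null scheme (producing a scheme-less URI) but rejects an empty one. A small sketch ({{UriSchemeDemo}} is a hypothetical name, not the patch's code):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Demonstrates the review comment: pass null, not "", for an absent scheme.
public class UriSchemeDemo {
    public static URI build(String scheme, String path) throws URISyntaxException {
        // URI(String scheme, String host, String path, String fragment):
        // a null scheme yields a relative, scheme-less URI; an empty scheme
        // fails to parse.
        return new URI(scheme, null, path, null);
    }
}
```

Using the standard multi-argument constructor this way also handles quoting of illegal characters, which is why it is preferable to hand-built string concatenation.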
[jira] [Commented] (YARN-5707) Add manager class for resource profiles
[ https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549865#comment-15549865 ] Arun Suresh commented on YARN-5707: --- Thanks for the updated patch. +1 > Add manager class for resource profiles > --- > > Key: YARN-5707 > URL: https://issues.apache.org/jira/browse/YARN-5707 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-5707-YARN-3926.001.patch, > YARN-5707-YARN-3926.002.patch, YARN-5707-YARN-3926.003.patch, > YARN-5707-YARN-3926.004.patch > > > Add a class that manages the resource profiles that are available for > applications to use. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash
[ https://issues.apache.org/jira/browse/YARN-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549855#comment-15549855 ] Hadoop QA commented on YARN-4911: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 34s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 54s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831798/YARN-4911.004.patch | | JIRA Issue | YARN-4911 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6265a2a44716 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d65b957 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13295/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13295/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Bad placement policy in FairScheduler causes the RM to crash > > > Key: YARN-4911 > URL: https://issues.apache.org/jira/browse/YARN-4911 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Reporter: Ray Chiang >Assignee: Ray Chiang > Labels: supportability > Attachments: YARN-4911.001.patch, YARN-4911.002.patch, >
[jira] [Commented] (YARN-5256) Add REST endpoint to support detailed NodeLabel Informations
[ https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549782#comment-15549782 ] Naganarasimha G R commented on YARN-5256: - Thanks for the patch [~sunilg], almost there. Could you check whether any of the checkstyle issues can be addressed? If you are addressing them, I hope you can also change the parameter name from {{labels}} to {{partitionLabels}} in RMWebServices line 1204 (just good to have, not mandatory...) > Add REST endpoint to support detailed NodeLabel Informations > > > Key: YARN-5256 > URL: https://issues.apache.org/jira/browse/YARN-5256 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5256-YARN-3368.1.patch, > YARN-5256-YARN-3368.2.patch, YARN-5256.0001.patch, YARN-5256.0002.patch > > > Add a new REST endpoint to fetch more detailed information about node > labels, such as resources, list of nodes, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549768#comment-15549768 ] Vrushali C commented on YARN-5667: -- Thanks [~haibochen], I took a quick look and it seems to be good. It does not look like you have updated the hbase version in this patch, right? Since you had mentioned you wanted to build with hbase 2.0, am I correct in thinking you tested this with hbase 1.1.3? Also, there are a lot of files moved around, so I wanted to confirm: the classes being moved are the ones primarily under hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/. If possible, could you share a listing of what remains in the hadoop-yarn-server-timelineservice/src/main? Also, did you enable the coprocessor on your pseudo cluster during your checks? Did you recollect the counters aggregated to the flow level? > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). > In our internal effort to try ATS v2 with HBase 2.0, which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. 
> {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5702) Refactor TestPBImplRecords so that we can reuse for testing protocol records in other YARN modules
[ https://issues.apache.org/jira/browse/YARN-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549740#comment-15549740 ] Hudson commented on YARN-5702: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10547 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10547/]) YARN-5702. Refactor TestPBImplRecords for reuse in other YARN modules. (cdouglas: rev d65b957776c4f055f82549a610fd1e8494580fe6) * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/BasePBImplRecordsTest.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java > Refactor TestPBImplRecords so that we can reuse for testing protocol records > in other YARN modules > -- > > Key: YARN-5702 > URL: https://issues.apache.org/jira/browse/YARN-5702 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Subru Krishnan >Assignee: Subru Krishnan > Attachments: YARN-5702-v1.patch, YARN-5702-v2.patch > > > The {{TestPBImplRecords}} has generic helper methods to validate YARN api > records. This JIRA proposes to refactor the generic helper methods into a > base class that can then be reused by other YARN modules for testing internal > API protocol records like in yarn-server-common for Federation (YARN-2915). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash
[ https://issues.apache.org/jira/browse/YARN-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated YARN-4911: - Attachment: YARN-4911.004.patch - Fix checkstyle issue > Bad placement policy in FairScheduler causes the RM to crash > > > Key: YARN-4911 > URL: https://issues.apache.org/jira/browse/YARN-4911 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Reporter: Ray Chiang >Assignee: Ray Chiang > Labels: supportability > Attachments: YARN-4911.001.patch, YARN-4911.002.patch, > YARN-4911.003.patch, YARN-4911.004.patch > > > When you have a fair-scheduler.xml with the rule: > > > > and the queue okay1 doesn't exist, the following exception occurs in the RM: > 2016-04-01 16:56:33,383 FATAL > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in > handling event type APP_ADDED to the scheduler > java.lang.IllegalStateException: Should have applied a rule before reaching > here > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementPolicy.assignAppToQueue(QueuePlacementPolicy.java:173) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.assignToQueue(FairScheduler.java:728) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplication(FairScheduler.java:634) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1224) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:691) > at java.lang.Thread.run(Thread.java:745) > which causes the RM to crash. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
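For context on why the YARN-4911 exception crashes the RM: the placement-policy chain assumes some rule always matches an application, and throws {{IllegalStateException}} when none does. One defensive shape — a hypothetical sketch of the direction, not the actual FairScheduler fix — is to end the chain with a safe default instead of throwing:

```java
import java.util.List;

// Hypothetical sketch: a rule chain that falls back to a default queue
// rather than throwing "Should have applied a rule before reaching here".
public class PlacementChain {
    public interface Rule {
        /** Returns a queue name, or null if this rule does not apply. */
        String assign(String user);
    }

    public static String assignToQueue(List<Rule> rules, String user) {
        for (Rule r : rules) {
            String queue = r.assign(user);
            if (queue != null) {
                return queue;
            }
        }
        // A misconfigured policy (e.g. referencing a nonexistent queue) now
        // degrades to a default placement instead of killing the RM's
        // scheduler event handler.
        return "root.default";
    }
}
```

Alternatively, the policy can be validated at configuration-load time so that a chain without a terminal rule is rejected before any application is submitted.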
[jira] [Commented] (YARN-5256) Add REST endpoint to support detailed NodeLabel Informations
[ https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549589#comment-15549589 ] Hadoop QA commented on YARN-5256: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 12 new + 46 unchanged - 0 fixed = 58 total (was 46) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 30s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 53m 45s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831784/YARN-5256.0002.patch | | JIRA Issue | YARN-5256 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 29d7b9bfb5b9 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d6be1e7 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13294/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13294/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13294/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Add REST endpoint to support detailed NodeLabel Informations > > > Key: YARN-5256 > URL:
[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port
[ https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549562#comment-15549562 ] Hadoop QA commented on YARN-5698:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 51s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 56s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 28s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 52s {color} | {color:green} YARN-3368 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 58s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s {color} | {color:green} YARN-3368 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 24s {color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 6s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s {color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 55s {color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:88ca7e4 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831700/YARN-5698-YARN-3368.0002.patch |
| JIRA Issue | YARN-5698 |
| Optional Tests | asflicense compile
[jira] [Updated] (YARN-4526) Make SystemClock singleton so AppSchedulingInfo could use it
[ https://issues.apache.org/jira/browse/YARN-4526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-4526: --- Fix Version/s: (was: 2.8.0) > Make SystemClock singleton so AppSchedulingInfo could use it > > > Key: YARN-4526 > URL: https://issues.apache.org/jira/browse/YARN-4526 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Fix For: 2.9.0, 3.0.0-alpha1 > > Attachments: yarn-4526-1.patch, yarn-4526-2.patch, yarn-4526-2.patch > > > To track the time a request is received, we need to get current system time. > For better testability of this, we are likely better off using a Clock > instance that uses SystemClock by default. Instead of creating umpteen > instances of SystemClock, we should just reuse the same instance. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
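The reuse argument in the description above lends itself to a small sketch. The following is an illustrative stand-alone singleton; the names mirror YARN's Clock/SystemClock, but this is not the actual Hadoop source:

```java
// Minimal sketch of the singleton idea in this JIRA: one shared SystemClock
// instead of many instances. Illustrative only, not the real YARN classes.
interface Clock {
    long getTime();
}

final class SystemClock implements Clock {
    private static final SystemClock INSTANCE = new SystemClock();

    private SystemClock() { }   // callers go through getInstance()

    public static SystemClock getInstance() {
        return INSTANCE;        // every caller shares the same instance
    }

    @Override
    public long getTime() {
        return System.currentTimeMillis();
    }
}
```

Code that needs testability can then accept a Clock and default it to SystemClock.getInstance(), so tests can substitute a fake clock without paying for a new SystemClock per object.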
[jira] [Updated] (YARN-4526) Make SystemClock singleton so AppSchedulingInfo could use it
[ https://issues.apache.org/jira/browse/YARN-4526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-4526: --- Fix Version/s: 2.8.0 > Make SystemClock singleton so AppSchedulingInfo could use it > > > Key: YARN-4526 > URL: https://issues.apache.org/jira/browse/YARN-4526 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1 > > Attachments: yarn-4526-1.patch, yarn-4526-2.patch, yarn-4526-2.patch > > > To track the time a request is received, we need to get current system time. > For better testability of this, we are likely better off using a Clock > instance that uses SystemClock by default. Instead of creating umpteen > instances of SystemClock, we should just reuse the same instance. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5256) Add REST endpoint to support detailed NodeLabel Informations
[ https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-5256: -- Attachment: YARN-5256.0002.patch Thanks [~naganarasimha...@apache.org] for the comments. Attaching new patch addressing the comments. > Add REST endpoint to support detailed NodeLabel Informations > > > Key: YARN-5256 > URL: https://issues.apache.org/jira/browse/YARN-5256 > Project: Hadoop YARN > Issue Type: Bug > Components: webapp >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5256-YARN-3368.1.patch, > YARN-5256-YARN-3368.2.patch, YARN-5256.0001.patch, YARN-5256.0002.patch > > > Add a new REST endpoint to fetch few more detailed information about node > labels such as resource, list of nodes etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5556) Support for deleting queues without requiring a RM restart
[ https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549384#comment-15549384 ] Naganarasimha Garla commented on YARN-5556: --- Sure, working on the test case; will get it out today. > Support for deleting queues without requiring a RM restart > -- > > Key: YARN-5556 > URL: https://issues.apache.org/jira/browse/YARN-5556 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Xuan Gong >Assignee: Naganarasimha G R > Attachments: YARN-5556.v1.001.patch > > > Today, we could add or modify queues without restarting the RM, via a CS > refresh. But for deleting queue, we have to restart the ResourceManager. We > could support for deleting queues without requiring a RM restart -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port
[ https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549351#comment-15549351 ] Sunil G commented on YARN-5698: --- Kicking jenkins again... > [YARN-3368] Launch new YARN UI under hadoop web app port > > > Key: YARN-5698 > URL: https://issues.apache.org/jira/browse/YARN-5698 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5698-YARN-3368.0001.patch, > YARN-5698-YARN-3368.0002.patch > > > As discussed in YARN-5145, it will be better to launch new web ui as a new > webapp under same old port. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549363#comment-15549363 ] Sunil G commented on YARN-5145: --- With YARN-5698 we can host the new UI under the same old webapp port. [~lewuathe], will you be able to fetch the new config from the RM on top of that patch? If you don't have the bandwidth, I could help. > [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR > - > > Key: YARN-5145 > URL: https://issues.apache.org/jira/browse/YARN-5145 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Kai Sasaki > Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, > YARN-5145-YARN-3368.01.patch, newUIInOldRMWebServer.png > > > Existing YARN UI configuration is under Hadoop package's directory: > $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to > $HADOOP_CONF_DIR like other configurations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5556) Support for deleting queues without requiring a RM restart
[ https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549345#comment-15549345 ] Xuan Gong commented on YARN-5556: - [~Naganarasimha] Any updates ? > Support for deleting queues without requiring a RM restart > -- > > Key: YARN-5556 > URL: https://issues.apache.org/jira/browse/YARN-5556 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Xuan Gong >Assignee: Naganarasimha G R > Attachments: YARN-5556.v1.001.patch > > > Today, we could add or modify queues without restarting the RM, via a CS > refresh. But for deleting queue, we have to restart the ResourceManager. We > could support for deleting queues without requiring a RM restart -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5101) YARN_APPLICATION_UPDATED event is parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with reversed order
[ https://issues.apache.org/jira/browse/YARN-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549339#comment-15549339 ] Xuan Gong commented on YARN-5101: - Thanks for the patch, [~sunilg], and thanks for the review, [~rohithsharma]. The patch looks good to me. +1 > YARN_APPLICATION_UPDATED event is parsed in > ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with > reversed order > --- > > Key: YARN-5101 > URL: https://issues.apache.org/jira/browse/YARN-5101 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xuan Gong >Assignee: Sunil G > Attachments: YARN-5101.0001.patch, YARN-5101.0002.patch > > > Right now, the application events are parsed in > ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with > timestamp descending order, which means the later events would be parsed > first, and the previous same type of events would override the information. In > https://issues.apache.org/jira/browse/YARN-4044, we have introduced > YARN_APPLICATION_UPDATED events which might be submitted by RM multiple times > in one application life cycle. This could cause problems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5047) Refactor nodeUpdate across schedulers
[ https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated YARN-5047: - Attachment: YARN-5047.011.patch
- Split nodeUpdate() into submethods
- Fixed rs variable
- Added JIRA to TODO comment
- Can't fix last comment--would require duplicating the logic of the if statement or something much messier.
> Refactor nodeUpdate across schedulers > - > > Key: YARN-5047 > URL: https://issues.apache.org/jira/browse/YARN-5047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, scheduler >Affects Versions: 3.0.0-alpha1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: YARN-5047.001.patch, YARN-5047.002.patch, > YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, > YARN-5047.006.patch, YARN-5047.007.patch, YARN-5047.008.patch, > YARN-5047.009.patch, YARN-5047.010.patch, YARN-5047.011.patch > > > FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of > commonality in their code. See about refactoring the common parts into > AbstractYARNScheduler. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
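As a rough illustration of the refactoring direction discussed in this JIRA (shared nodeUpdate() steps hoisted into the abstract scheduler, with the scheduler-specific allocation left as a hook), here is a hypothetical template-method sketch; the class and method names are illustrative, not the actual Hadoop signatures:

```java
// Hypothetical sketch of hoisting a shared nodeUpdate() flow into an
// abstract base class; only the allocation step differs per scheduler.
// Names are illustrative, not the real AbstractYarnScheduler API.
abstract class SchedulerSketch {
    int heartbeats;  // shared bookkeeping, visible here for the demo

    // Template method: shared steps run in a fixed order.
    public final void nodeUpdate(String node) {
        recordHeartbeat(node);             // common bookkeeping
        releaseCompletedContainers(node);  // common bookkeeping
        attemptScheduling(node);           // scheduler-specific hook
    }

    private void recordHeartbeat(String node) {
        heartbeats++;  // stand-in for shared heartbeat handling
    }

    private void releaseCompletedContainers(String node) {
        // stand-in for shared completed-container processing
    }

    // Each concrete scheduler supplies its own allocation policy.
    protected abstract void attemptScheduling(String node);
}

class FairSchedulerSketch extends SchedulerSketch {
    boolean scheduled;

    @Override
    protected void attemptScheduling(String node) {
        scheduled = true;  // fair-share allocation would go here
    }
}
```

The concrete schedulers then keep only what genuinely differs, which is the intent behind splitting nodeUpdate() into submethods.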
[jira] [Commented] (YARN-5699) Retrospect container entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548904#comment-15548904 ] Naganarasimha G R commented on YARN-5699: - Thanks [~rohithsharma] & [~gtCarrera9] for detailing it out. bq. For example : container exist status, container state, createdTime, finished time. Pushing this kind of information to *both places* would be fine, but IMO diagnostics-type data need not be stored twice. Publishing to both places ensures existing clients are not impacted. bq. Thirdly, event RM_CONTAINER_CREATED holds diagnostics, state and status. IIUC you mean container finished, right? As an aside (not related to this JIRA): one more thing I noticed is how a user will know which entity types are present to be queried; do we need to capture this in our documentation? For example, for MR entities, what are the different entity IDs to query? Where are the MR job configs present? How does one access the MR job counters? Similarly for YARN system entities, how to query the resource usage and the container stats (number of containers at different priorities, time taken to finish, etc.)? These are some of the questions I have encountered from others. > Retrospect container entity fields which are publishing in events info fields. > -- > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. These are general information to container required for > container-report. So it is better to publish at top level info field. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5707) Add manager class for resource profiles
[ https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548487#comment-15548487 ] Hadoop QA commented on YARN-5707:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 7 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 6s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 33s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 3s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 47s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 39s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s {color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 8s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 47s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 5 new + 214 unchanged - 1 fixed = 219 total (was 215) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s {color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 28s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 54s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 41s {color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831719/YARN-5707-YARN-3926.004.patch |
| JIRA Issue | YARN-5707 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 4b7ed234542c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-3926 / 0bc6696 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle |
[jira] [Comment Edited] (YARN-5699) Retrospect container entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548475#comment-15548475 ] Varun Saxena edited comment on YARN-5699 at 10/5/16 11:51 AM: -- bq. For containers, some of the event info should be published at container info level. Agree with the intention behind this JIRA [~rohithsharma] . Some info which we may have to query and does not change per event i.e. this info is not repeated across events, we should have at entity info level. It will enable users to use info filters for it. Only info which is strictly specific to an event is what can be kept at event info level. This we can keep as the thumb rule for publishing info. However, we can also consider publishing it at both places on need basis (i.e. if and when we find a use case). was (Author: varun_saxena): bq. For containers, some of the event info should be published at container info level. Agree with the intention behind this JIRA [~rohithsharma] . Some info which we may have to query and does not change per event i.e. this info is not repeated across events, we should have at entity info level. Only info which is strictly specific to an event is what can be kept at event info level. This we can keep as the thumb rule for publishing info. However, we can also consider publishing it at both places on need basis (i.e. if and when we find a use case). > Retrospect container entity fields which are publishing in events info fields. > -- > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. 
These are general information to container required for > container-report. So it is better to publish at top level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect container entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548475#comment-15548475 ] Varun Saxena commented on YARN-5699: bq. For containers, some of the event info should be published at container info level. Agree with the intention behind this JIRA [~rohithsharma] . Some info which we may have to query and does not change per event i.e. this info is not repeated across events, we should have at entity info level. Only info which is strictly specific to an event is what can be kept at event info level. This we can keep as the thumb rule for publishing info. However, we can also consider publishing it at both places on need basis (i.e. if and when we find a use case). > Retrospect container entity fields which are publishing in events info fields. > -- > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. These are general information to container required for > container-report. So it is better to publish at top level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect container entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548447#comment-15548447 ] Rohith Sharma K S commented on YARN-5699: - When I raised this JIRA, my concern was the same one Li mentioned. We have lost the ability to easily query YARN entities by putting some information in event info fields. Currently, ATS does not support querying by an event-info filter, and even if it did, it would still be a burden for the user to parse the information out of different events. For example, a YARN container has information in 3 different places. Firstly, the entity info field holds the user, host details, priority, etc. Secondly, the event RM_CONTAINER_CREATED holds information such as the container start time. Thirdly, the event RM_CONTAINER_CREATED holds diagnostics, state and status. So I think entity-related information is better published at the entity info level. It is basically a code refactoring. > Retrospect container entity fields which are publishing in events info fields. > -- > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. These are general information to container required for > container-report. So it is better to publish at top level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
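The trade-off being debated here (entity-level info is visible to info filters in a single lookup, while event-level info must be dug out of individual events) can be shown with a toy model. This sketch uses plain maps, not the real TimelineEntity/TimelineEvent API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the two placements discussed in this JIRA (plain maps, not
// the actual ATSv2 classes). Info stored at the entity level is reachable
// with one lookup; info stored inside an event requires knowing which
// event to inspect.
class EntitySketch {
    // Top-level entity info: directly filterable.
    final Map<String, Object> entityInfo = new HashMap<>();
    // Per-event info, keyed by event type: needs per-event parsing.
    final Map<String, Map<String, Object>> eventInfo = new HashMap<>();

    void addEntityInfo(String key, Object value) {
        entityInfo.put(key, value);
    }

    void addEventInfo(String event, String key, Object value) {
        eventInfo.computeIfAbsent(event, e -> new HashMap<>()).put(key, value);
    }
}
```

Publishing createdTime via addEntityInfo makes it answerable from the entity alone; publishing it only via addEventInfo forces the reader to know and walk the right event, which is the burden described above.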
[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor
[ https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548334#comment-15548334 ] Hudson commented on YARN-5704: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10543 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10543/]) YARN-5704. Provide config knobs to control enabling/disabling new/work (vvasudev: rev 0992708d790b5bd3dab85987b7ad7c6fc2cc4b18) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c > Provide config knobs to control enabling/disabling new/work in progress > features in container-executor > -- > > Key: YARN-5704 > URL: https://issues.apache.org/jira/browse/YARN-5704 > Project: Hadoop YARN > Issue Type: Task > Components: yarn >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana > Attachments: YARN-5704.001.patch > > > Provide a mechanism to enable/disable Docker and TC (Traffic Control) > functionality at the container-executor level. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5707) Add manager class for resource profiles
[ https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated YARN-5707: Attachment: YARN-5707-YARN-3926.004.patch Uploaded a patch to fix the failing unit test. > Add manager class for resource profiles > --- > > Key: YARN-5707 > URL: https://issues.apache.org/jira/browse/YARN-5707 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-5707-YARN-3926.001.patch, > YARN-5707-YARN-3926.002.patch, YARN-5707-YARN-3926.003.patch, > YARN-5707-YARN-3926.004.patch > > > Add a class that manages the resource profiles that are available for > applications to use. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor
[ https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548288#comment-15548288 ] Varun Vasudev commented on YARN-5704: - [~sidharta-s] - the patch doesn't apply cleanly to branch-2.8 or branch-2.7.3. Can you please provide versions for them? I've committed it to trunk and branch-2. > Provide config knobs to control enabling/disabling new/work in progress > features in container-executor > -- > > Key: YARN-5704 > URL: https://issues.apache.org/jira/browse/YARN-5704 > Project: Hadoop YARN > Issue Type: Task > Components: yarn >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana > Attachments: YARN-5704.001.patch > > > Provide a mechanism to enable/disable Docker and TC (Traffic Control) > functionality at the container-executor level. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor
[ https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548278#comment-15548278 ] Varun Vasudev commented on YARN-5704: - +1. Committing to trunk and branch-2. > Provide config knobs to control enabling/disabling new/work in progress > features in container-executor > -- > > Key: YARN-5704 > URL: https://issues.apache.org/jira/browse/YARN-5704 > Project: Hadoop YARN > Issue Type: Task > Components: yarn >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana > Attachments: YARN-5704.001.patch > > > Provide a mechanism to enable/disable Docker and TC (Traffic Control) > functionality at the container-executor level. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics
[ https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548227#comment-15548227 ] Sunil G commented on YARN-5148: --- Thanks [~lewuathe] for the patch. The new screenshot looks fine to me for metrics. A few comments on the config and log pages.
1. When I visited the “Tools” page, there was no default selection for the sub-tabs. I think “YARN Configuration” could be made the default selection.
2. I think we can show the same *jmx/logs* within the same UI template. Even if we do not render them cleanly, it is better to show them within the new YARN UI. Currently we get redirected to another page and have to use the browser's back button to come back, which is not ideal.
Code review:
1. {{adapters/yarn-conf.js}} could extend {{adapters/abstract.js}}.
2. *jmx/logs* could be rendered with a plain-text adapter (similar to yarn-container-logs).
> [YARN-3368] Add page to new YARN UI to view server side > configurations/logs/JVM-metrics > --- > > Key: YARN-5148 > URL: https://issues.apache.org/jira/browse/YARN-5148 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Kai Sasaki > Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot > 2016-09-13 at 22.27.00.png, YARN-5148-YARN-3368.01.patch, > YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, yarn-conf.png, > yarn-tools.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods
[ https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548186#comment-15548186 ] Junping Du commented on YARN-5659: -- Latest patch LGTM. +1. Will commit it shortly if there are no further comments from others. > getPathFromYarnURL should use standard methods > -- > > Key: YARN-5659 > URL: https://issues.apache.org/jira/browse/YARN-5659 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: YARN-5659.01.patch, YARN-5659.02.patch, > YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, > YARN-5659.05.patch, YARN-5659.05.patch, YARN-5659.patch > > > getPathFromYarnURL does some string shenanigans where standard ctors should > suffice. > There are also bugs in it, e.g. passing an empty scheme to the URI ctor is > invalid; null should be used.
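The description above notes that passing an empty scheme string to the {{URI}} constructor is invalid and that null should be used instead. A minimal standalone sketch (illustrative only, not the actual YARN-5659 patch) showing the difference:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriSchemeDemo {
    public static void main(String[] args) throws URISyntaxException {
        // A null scheme simply omits the component, yielding a relative URI.
        URI ok = new URI(null, null, "/tmp/resource.jar", null);
        System.out.println(ok.getScheme()); // null
        System.out.println(ok.getPath());   // /tmp/resource.jar

        // An empty-string scheme makes the ctor build ":/tmp/resource.jar",
        // which fails to parse ("Expected scheme name at index 0").
        try {
            new URI("", null, "/tmp/resource.jar", null);
            System.out.println("unexpectedly accepted");
        } catch (URISyntaxException e) {
            System.out.println("empty scheme rejected");
        }
    }
}
```

This is why the multi-argument constructor should be handed null rather than "" for an absent scheme.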
[jira] [Commented] (YARN-5707) Add manager class for resource profiles
[ https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548069#comment-15548069 ] Hadoop QA commented on YARN-5707: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 14m 26s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 44s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 46s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s {color} | {color:green} YARN-3926 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 214 unchanged - 1 fixed = 223 total (was 215) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 33s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 59s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.resource.TestResourceProfiles | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831688/YARN-5707-YARN-3926.003.patch | | JIRA Issue | YARN-5707 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux a9f69d975b2f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3926 / 0bc6696 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle |
[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port
[ https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548046#comment-15548046 ] Hadoop QA commented on YARN-5698: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 23s {color} | {color:red} Docker failed to build yetus/hadoop:88ca7e4. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831700/YARN-5698-YARN-3368.0002.patch | | JIRA Issue | YARN-5698 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13290/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > [YARN-3368] Launch new YARN UI under hadoop web app port > > > Key: YARN-5698 > URL: https://issues.apache.org/jira/browse/YARN-5698 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5698-YARN-3368.0001.patch, > YARN-5698-YARN-3368.0002.patch > > > As discussed in YARN-5145, it will be better to launch new web ui as a new > webapp under same old port.
[jira] [Updated] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port
[ https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-5698: -- Attachment: YARN-5698-YARN-3368.0002.patch Thanks [~leftnoteasy] for the comments. bq.When RM is runs on the local host, we still need to run corsproxy. Do you know what can we do to fix it? I have enabled the CORS headers in the RM as per YARN-4009. With this support, I can launch and view the new YARN web UI without corsproxy. core-site.xml
{noformat}
<property>
  <name>hadoop.http.cross-origin.enabled</name>
  <value>false</value>
  <description>Enable/disable the cross-origin (CORS) filter.</description>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-origins</name>
  <value>*</value>
  <description>Comma separated list of origins that are allowed for web
  services needing cross-origin (CORS) support. Wildcards (*) and patterns
  allowed</description>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-methods</name>
  <value>GET,POST,HEAD</value>
  <description>Comma separated list of methods that are allowed for web
  services needing cross-origin (CORS) support.</description>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
  <description>Comma separated list of headers that are allowed for web
  services needing cross-origin (CORS) support.</description>
</property>
<property>
  <name>hadoop.http.cross-origin.max-age</name>
  <value>1800</value>
  <description>The number of seconds a pre-flighted request can be cached
  for web services needing cross-origin (CORS) support.</description>
</property>
{noformat}
yarn-site.xml
{noformat}
<property>
  <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>
  <value>true</value>
  <description>Flag to enable cross-origin (CORS) support in the RM. This
  flag requires the CORS filter initializer to be added to the filter
  initializers list in core-site.xml.</description>
</property>
<property>
  <name>yarn.nodemanager.webapp.cross-origin.enabled</name>
  <value>true</value>
  <description>Flag to enable cross-origin (CORS) support in the NM. This
  flag requires the CORS filter initializer to be added to the filter
  initializers list in core-site.xml.</description>
</property>
{noformat}
bq.And if you have some bandwidth, could you test the patch on a distributed environment? I have tested this in a 5-node cluster. Seems fine.
> [YARN-3368] Launch new YARN UI under hadoop web app port > > > Key: YARN-5698 > URL: https://issues.apache.org/jira/browse/YARN-5698 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5698-YARN-3368.0001.patch, > YARN-5698-YARN-3368.0002.patch > > > As discussed in YARN-5145, it will be better to launch new web ui as a new > webapp under same old port.
[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547980#comment-15547980 ] Varun Saxena commented on YARN-5585: By the way, thinking more over it, we cannot predict how the Separator class will change in future; the code I suggested above is not really tied to anything. So we can either adopt the approach above and mandatorily add a test case to ensure the stop row does not end with 0xFF, or we can adopt what you have done, i.e. copy the relevant code from HBase; but if we do that, we should probably put the code in a Utils class instead of GenericEntityReader. > [Atsv2] Add a new filter fromId in REST endpoints > - > > Key: YARN-5585 > URL: https://issues.apache.org/jira/browse/YARN-5585 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, > YARN-5585.v0.patch > > > The TimelineReader REST APIs provide a lot of filters to retrieve > applications. Along with those, it would be good to add a new filter, fromId, > so that entities can be retrieved after the fromId. > Current behavior: the default limit is set to 100. If there are 1000 entities, > the REST call gives the first/last 100 entities. How does one retrieve the next set of 100 > entities, i.e. 101 to 200 OR 900 to 801? > Example: if applications stored in the database are app-1, app-2 ... app-10, then > *getApps?limit=5* gives app-1 to app-5, but there is > no way to retrieve the next 5 apps. > So the proposal is to have fromId in the filter, like > *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to > app-10. > Since ATS targets storage of a large number of entities, it is a very common > use case to get the next set of entities using fromId rather than querying all > the entities. This is very useful for pagination in the web UI.
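On the stop-row point above: the usual trick for prefix scans (the approach HBase's own client code takes) is to increment the last byte of the prefix that is not 0xFF and truncate everything after it, so the exclusive stop row never ends with 0xFF. A hypothetical helper, not the actual ATSv2 or HBase code, sketching that:

```java
import java.util.Arrays;

// Hypothetical utility: compute an exclusive scan stop row for a prefix.
public class StopRowUtil {
    public static byte[] closestNextRow(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        // Walk backwards to the last byte that can still be incremented.
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                // Truncate after the incremented byte; the result is the
                // smallest row key strictly greater than every key with
                // the given prefix, and it never ends with 0xFF.
                return Arrays.copyOf(stop, i + 1);
            }
        }
        // All bytes are 0xFF: there is no larger key, so scan to the end
        // of the table (HBase treats an empty stop row as "no upper bound").
        return new byte[0];
    }
}
```

For example, prefix {0x01, 0x02, 0x03} yields stop row {0x01, 0x02, 0x04}, and {0x01, 0xFF} yields {0x02}. A unit test over the all-0xFF and trailing-0xFF cases would cover the concern raised above.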
[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue
[ https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547911#comment-15547911 ] Hadoop QA commented on YARN-5554: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 77 unchanged - 3 fixed = 77 total (was 80) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 47m 27s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 35s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831683/YARN-5554.9.patch | | JIRA Issue | YARN-5554 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b2ca74d02a86 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 31f8da2 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13287/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13287/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > MoveApplicationAcrossQueues does not check user permission on the target queue > -- > > Key: YARN-5554 > URL: https://issues.apache.org/jira/browse/YARN-5554 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects
[jira] [Commented] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash
[ https://issues.apache.org/jira/browse/YARN-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547906#comment-15547906 ] Hadoop QA commented on YARN-4911: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 0s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 268 unchanged - 0 fixed = 269 total (was 268) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 19s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 9s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831682/YARN-4911.003.patch | | JIRA Issue | YARN-4911 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5041b3732f55 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 31f8da2 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13288/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13288/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13288/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Bad placement policy in FairScheduler causes the RM to crash > > > Key: YARN-4911 >
[jira] [Updated] (YARN-5707) Add manager class for resource profiles
[ https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated YARN-5707: Attachment: YARN-5707-YARN-3926.003.patch Thanks for the reviews [~leftnoteasy] and [~asuresh]! Uploaded a new patch with the fixes - 1) Fixed the license warnings 2) bq. Think you can remove the default constructor ResourceProfilesManagerImpl() Fixed 3) bq. The profiles map can be final and initialized to ConcurrentHashMap.. that way, you might not need to synchronize all the methods. Fixed 4) bq. I feel getResourceProfiles() should not return null... this would place the burden of checking for null on the client. It would be better if an empty map is sent. Fixed 5) bq. Rather than explicitly asking the manager to reload, maybe it would be nice to have a reloading thread that monitors a given file for changes. We do this in the KMSACLs class. This was an oversight on my part - the function is meant for testing only. I've annotated it correctly. Thanks for catching it. > Add manager class for resource profiles > --- > > Key: YARN-5707 > URL: https://issues.apache.org/jira/browse/YARN-5707 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-5707-YARN-3926.001.patch, > YARN-5707-YARN-3926.002.patch, YARN-5707-YARN-3926.003.patch > > > Add a class that manages the resource profiles that are available for > applications to use.
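Review items 3 and 4 above (a final ConcurrentHashMap instead of synchronized accessors, and an empty map instead of null) can be sketched as follows; the class, field, and method names here are illustrative and not the actual YARN-5707 code:

```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProfilesManagerSketch {
    // final + ConcurrentHashMap: safe publication and thread-safe reads
    // and writes without synchronizing every method.
    private final Map<String, Long> profiles = new ConcurrentHashMap<>();

    public void addProfile(String name, long memoryMb) {
        profiles.put(name, memoryMb);
    }

    // Never returns null: callers always get a (read-only) map, empty if
    // no profiles are loaded, so they need no null checks.
    public Map<String, Long> getResourceProfiles() {
        return Collections.unmodifiableMap(profiles);
    }
}
```

The unmodifiable wrapper also keeps clients from mutating the manager's internal state through the returned view.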
[jira] [Commented] (YARN-5101) YARN_APPLICATION_UPDATED event is parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with reversed order
[ https://issues.apache.org/jira/browse/YARN-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547806#comment-15547806 ] Rohith Sharma K S commented on YARN-5101: - The patch looks good to me. I will wait for a day; [~xgong], would you like to look at the patch? > YARN_APPLICATION_UPDATED event is parsed in > ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with > reversed order > --- > > Key: YARN-5101 > URL: https://issues.apache.org/jira/browse/YARN-5101 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xuan Gong >Assignee: Sunil G > Attachments: YARN-5101.0001.patch, YARN-5101.0002.patch > > > Right now, the application events are parsed in > ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport in > timestamp-descending order, which means the later events are parsed > first and earlier events of the same type then override their information. In > https://issues.apache.org/jira/browse/YARN-4044, we introduced > YARN_APPLICATION_UPDATED events, which may be submitted by the RM multiple times > in one application life cycle. This could cause problems.
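The overriding behavior described in the issue can be shown with a toy example (not the actual ApplicationHistoryManagerOnTimelineStore code): when events are iterated newest-first and each update unconditionally overwrites the report fields, the oldest event's value is applied last and therefore wins.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class EventOrderDemo {
    static class Event {
        final long timestamp;
        final String queue;
        Event(long timestamp, String queue) {
            this.timestamp = timestamp;
            this.queue = queue;
        }
    }

    // Mimics the bug: iterate events in descending-timestamp order and let
    // every event overwrite the field, so the OLDEST value survives.
    static String resolveQueueNewestFirst(List<Event> events) {
        List<Event> sorted = new ArrayList<>(events);
        sorted.sort(Comparator.comparingLong((Event e) -> e.timestamp).reversed());
        String queue = null;
        for (Event e : sorted) {
            queue = e.queue; // unconditional overwrite
        }
        return queue;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event(1L, "default"),  // original submission
            new Event(2L, "prod"));    // a later update event
        System.out.println(resolveQueueNewestFirst(events)); // prints "default", not "prod"
    }
}
```

With repeatable YARN_APPLICATION_UPDATED events, either the iteration order or the overwrite must be changed so the newest update's data ends up in the report.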