[jira] [Assigned] (YARN-5715) introduce entity prefix for return and sort order
[ https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohith Sharma K S reassigned YARN-5715:
---------------------------------------

    Assignee: Rohith Sharma K S

> introduce entity prefix for return and sort order
> -------------------------------------------------
>
>                 Key: YARN-5715
>                 URL: https://issues.apache.org/jira/browse/YARN-5715
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Sangjin Lee
>            Assignee: Rohith Sharma K S
>            Priority: Critical
>         Attachments: YARN-5715-YARN-5355.01.patch
>
> While looking into YARN-5585, we have come across the need to provide a sort
> order different from the current entity id order. The current entity id order
> returns entities strictly in lexicographical order, and as such it returns
> the earliest entities first. This may not be the most natural return order.
> A more natural return/sort order would be from the most recent entities.
>
> To solve this, we would like to add what we call the "entity prefix" to the
> row key for the entity table. It is a number (long) that can be easily
> provided by the client on write. In the row key, it would be added before the
> entity id itself.
>
> The entity prefix would be considered mandatory. On all writes (including
> updates) the correct entity prefix should be set by the client so that the
> correct row key is used. The entity prefix needs to be unique only within the
> scope of the application and the entity type.
>
> For queries that return a list of entities, the prefix values will be
> returned along with the entity ids. Queries that specify both the prefix and
> the id should be answered quickly using the row key. If the query omits the
> prefix but specifies the id (query by id), the query may be less efficient.
>
> This JIRA should add the entity prefix to the entity API and add its handling
> to the schema and the write path. The read path will be addressed in
> YARN-5585.
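The row-key scheme described above (a long prefix stored ahead of the entity id) can be sketched as follows. This is an illustrative sketch only, not the actual YARN-5355 schema code: the class and method names are hypothetical, and it assumes the prefix is stored inverted so that larger prefix values (e.g. creation time) sort first under HBase's lexicographic byte ordering.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the proposed entity-prefix row key (hypothetical
// names, not the actual YARN-5355 schema code).
public class EntityRowKeySketch {

    // Invert the prefix so that larger values produce byte sequences that
    // sort earlier, i.e. the most recent entities come back first.
    public static long invert(long prefix) {
        return Long.MAX_VALUE - prefix;
    }

    // The part of the row key after application id and entity type:
    // inverted prefix bytes followed by the entity id bytes.
    public static byte[] rowKeySuffix(long prefix, String entityId) {
        byte[] id = entityId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(Long.BYTES + id.length);
        buf.putLong(invert(prefix));
        buf.put(id);
        return buf.array();
    }

    // Unsigned lexicographic byte comparison, the order HBase keeps rows in.
    public static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] older = rowKeySuffix(1000L, "entity-a");
        byte[] newer = rowKeySuffix(2000L, "entity-b");
        // The entity with the larger prefix (e.g. later creation time)
        // sorts first, giving the "most recent first" return order.
        System.out.println(compare(newer, older) < 0); // true
    }
}
```

Because the prefix comes before the entity id in the key, a query that supplies both prefix and id resolves to an exact row, while a query-by-id alone has to scan within the application/type scope, which matches the efficiency note in the description.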
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order
[ https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15553462#comment-15553462 ]

Sangjin Lee commented on YARN-5715:
-----------------------------------

During the call, I was thinking we might want to have a separate JIRA to deal with the YARN-generic entities, distributed shell, and MR, but on second thought I think it might be easier to do it in one patch. Let's see how it goes...
[jira] [Updated] (YARN-5715) introduce entity prefix for return and sort order
[ https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sangjin Lee updated YARN-5715:
------------------------------
    Attachment: YARN-5715-YARN-5355.01.patch

Moving over the existing patch by [~rohithsharma] from YARN-5585 as the starting point.
[jira] [Created] (YARN-5715) introduce entity prefix for return and sort order
Sangjin Lee created YARN-5715:
---------------------------------

             Summary: introduce entity prefix for return and sort order
                 Key: YARN-5715
                 URL: https://issues.apache.org/jira/browse/YARN-5715
             Project: Hadoop YARN
          Issue Type: Sub-task
          Components: timelineserver
            Reporter: Sangjin Lee
            Priority: Critical
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15553303#comment-15553303 ]

Eric Payne commented on YARN-2009:
----------------------------------

Thanks, [~sunilg], for your reply.

- {{FifoIntraQueuePreemptionPlugin#calculateIdealAssignedResourcePerApp}}
-- The assignment to {{tmpApp.idealAssigned}} should be cloned:
{code}
tmpApp.idealAssigned = Resources.min(rc, clusterResource, queueTotalUnassigned, appIdealAssigned);
...
Resources.subtractFrom(queueTotalUnassigned, tmpApp.idealAssigned);
{code}
-- In the above code, if {{queueTotalUnassigned}} is less than {{appIdealAssigned}}, then {{tmpApp.idealAssigned}} is assigned a reference to {{queueTotalUnassigned}}. Later, {{tmpApp.idealAssigned}} is then effectively subtracted from itself.

- bq. This current patch will still handle priority and priority + user-limit. Thoughts?
I am not comfortable with fixing this in another patch. Our main use case is the one where multiple users need to use the same queue with apps at the same priority.

- I still need to think through all of the effects, but I was thinking that something like the following could work:
-- I think my use case is failing because {{FifoIntraQueuePreemptionPlugin#calculateIdealAssignedResourcePerApp}} orders the apps by priority. Instead, I think it should order the apps by how much they are underserved, i.e. by {{tmpApp.toBePreemptedByOther}} rather than by priority.
-- Then, if {{calculateIdealAssignedResourcePerApp}} orders the apps by {{toBePreemptedByOther}}, {{validateOutSameAppPriorityFromDemand}} would also need to compare not the priorities but the apps' requirements.
-- I think it should be something like the following, maybe:
{code}
while (lowAppIndex < highAppIndex
    && !apps[lowAppIndex].equals(apps[highAppIndex])
    //&& apps[lowAppIndex].getPriority() < apps[highAppIndex].getPriority()) {
    && Resources.lessThan(rc, clusterResource,
        apps[lowAppIndex].getToBePreemptFromOther(),
        apps[highAppIndex].getToBePreemptFromOther())) {
{code}

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> -----------------------------------------------------------------------
>
>                 Key: YARN-2009
>                 URL: https://issues.apache.org/jira/browse/YARN-2009
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacityscheduler
>            Reporter: Devaraj K
>            Assignee: Sunil G
>         Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch
>
> While preempting containers based on the queue ideal assignment, we may need
> to consider preempting the low priority application containers first.
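For reference, the aliasing problem Eric describes can be reproduced with a minimal mock. {{Res}}, {{min}}, and {{subtractFrom}} below are hypothetical stand-ins for Hadoop's Resource/Resources classes (whose {{min}} likewise returns one of its arguments rather than a copy), not the real implementations:

```java
// Minimal mock of the aliasing bug: min() returns a reference to one of its
// arguments, so assigning without a clone lets a later subtractFrom() mutate
// both variables at once. Res/min/subtractFrom are hypothetical stand-ins
// for Hadoop's Resource/Resources, not the real classes.
public class AliasingSketch {

    public static class Res {
        public long mem;
        public Res(long m) { mem = m; }
        public Res copy() { return new Res(mem); }
    }

    // Like Resources.min: returns the smaller argument itself, not a copy.
    public static Res min(Res a, Res b) {
        return a.mem <= b.mem ? a : b;
    }

    public static void subtractFrom(Res from, Res v) {
        from.mem -= v.mem;
    }

    public static void main(String[] args) {
        // Buggy path: idealAssigned ends up aliasing queueTotalUnassigned.
        Res queueTotalUnassigned = new Res(4096);
        Res idealAssigned = min(queueTotalUnassigned, new Res(8192));
        subtractFrom(queueTotalUnassigned, idealAssigned);
        // 4096 - 4096 = 0, and idealAssigned was zeroed along with it.
        System.out.println(idealAssigned.mem); // 0

        // Fixed path: clone the result of min before assigning it.
        Res queue2 = new Res(4096);
        Res ideal2 = min(queue2, new Res(8192)).copy();
        subtractFrom(queue2, ideal2);
        System.out.println(ideal2.mem); // 4096, unaffected by the subtraction
    }
}
```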
[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map
[ https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remi Catherinot updated YARN-5714:
----------------------------------
    Description:
When dumping the launch container script, environment variables are dumped in the order used internally by the map implementation (hash based). This does not take into consideration that some env variables may refer to each other, and so some env variables must be declared before those referencing them.

In my case, I ended up having LD_LIBRARY_PATH, which depends on HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. It therefore had a wrong value, so native libraries weren't loaded; jobs were running but not at their best efficiency. This is just one use case hitting this bug, but I'm sure others may happen as well.

I already have a patch running in my production environment; I estimate about 5 days for packaging the patch in the right fashion for JIRA plus trying my best to add tests.

Note: the patch is not OS aware, with a default empty implementation. I will only implement the Unix version in a first release. I'm not used to the Windows env variable syntax, so it will take me more time/research for it.

  was:
When dumping the launch container script, environment variables are dumped in the order used internally by the map implementation (hash based). This does not take into consideration that some env variables may refer to each other, and so some env variables must be declared before those referencing them.

In my case, I ended up having LD_LIBRARY_PATH, which depends on HADOOP_COMMON_HOME, dumped before HADOOP_COMMON_HOME. It therefore had a wrong value, so native libraries weren't loaded; jobs were running but not at their best. This is just one use case hitting this bug, but I'm sure others may happen as well.

I already have a patch running in my production environment; I estimate about 5 days for packaging the patch in the right fashion for JIRA plus trying my best to add tests.

Note: the patch is not OS aware, with a default empty implementation. I will only implement the Unix version in a first release. I'm not used to the Windows env variable syntax, so it will take me more time/research for it.

> ContainerExecutor does not order environment map
> ------------------------------------------------
>
>                 Key: YARN-5714
>                 URL: https://issues.apache.org/jira/browse/YARN-5714
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
>         Environment: all (Linux and Windows alike)
>            Reporter: Remi Catherinot
>            Priority: Trivial
>   Original Estimate: 120h
>  Remaining Estimate: 120h
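One way to implement the ordering this issue asks for is a dependency-aware (topological) sort of the environment map. The sketch below is a hypothetical illustration, not the actual YARN-5714 patch; it handles only Unix-style `$VAR`/`${VAR}` references, matching the reporter's stated Unix-first scope:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch (not the actual YARN-5714 patch): order environment
// variables so that any variable referenced inside another value is written
// out before the value that references it.
public class EnvOrderSketch {

    // Unix-style references: $VAR or ${VAR}.
    private static final Pattern REF =
        Pattern.compile("\\$\\{?([A-Za-z_][A-Za-z0-9_]*)\\}?");

    public static List<String> dependencyOrder(Map<String, String> env) {
        List<String> ordered = new ArrayList<>();
        Set<String> done = new HashSet<>();
        for (String name : env.keySet()) {
            visit(name, env, done, new HashSet<>(), ordered);
        }
        return ordered;
    }

    // Depth-first: emit each variable's dependencies before the variable.
    private static void visit(String name, Map<String, String> env,
            Set<String> done, Set<String> inProgress, List<String> out) {
        if (done.contains(name) || !env.containsKey(name)) {
            return; // already emitted, or defined outside this map
        }
        if (!inProgress.add(name)) {
            return; // cycle: give up on reordering this chain
        }
        Matcher m = REF.matcher(env.get(name));
        while (m.find()) {
            visit(m.group(1), env, done, inProgress, out);
        }
        inProgress.remove(name);
        done.add(name);
        out.add(name);
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("LD_LIBRARY_PATH", "${HADOOP_COMMON_HOME}/lib/native");
        env.put("HADOOP_COMMON_HOME", "/opt/hadoop");
        // HADOOP_COMMON_HOME comes out before LD_LIBRARY_PATH regardless of
        // the map's internal (hash-based) iteration order.
        System.out.println(dependencyOrder(env));
        // [HADOOP_COMMON_HOME, LD_LIBRARY_PATH]
    }
}
```

Writing the launch script in this order would give LD_LIBRARY_PATH its correct, fully expanded value in the reporter's scenario.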
[jira] [Created] (YARN-5714) ContainerExecutor does not order environment map
Remi Catherinot created YARN-5714:
---------------------------------

             Summary: ContainerExecutor does not order environment map
                 Key: YARN-5714
                 URL: https://issues.apache.org/jira/browse/YARN-5714
             Project: Hadoop YARN
          Issue Type: Bug
          Components: nodemanager
    Affects Versions: 3.0.0-alpha1, 2.6.4, 2.7.3, 2.5.2, 2.4.1
         Environment: all (Linux and Windows alike)
            Reporter: Remi Catherinot
            Priority: Trivial
[jira] [Commented] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15553123#comment-15553123 ]

Hadoop QA commented on YARN-5677:
---------------------------------

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 6m 39s | trunk passed |
| +1 | compile | 0m 31s | trunk passed |
| +1 | checkstyle | 0m 20s | trunk passed |
| +1 | mvnsite | 0m 37s | trunk passed |
| +1 | mvneclipse | 0m 16s | trunk passed |
| +1 | findbugs | 0m 57s | trunk passed |
| +1 | javadoc | 0m 20s | trunk passed |
| +1 | mvninstall | 0m 31s | the patch passed |
| +1 | compile | 0m 28s | the patch passed |
| +1 | javac | 0m 28s | the patch passed |
| +1 | checkstyle | 0m 17s | the patch passed |
| +1 | mvnsite | 0m 34s | the patch passed |
| +1 | mvneclipse | 0m 14s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 1s | the patch passed |
| +1 | javadoc | 0m 18s | the patch passed |
| +1 | unit | 38m 31s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 52m 41s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12832021/YARN-5677.005.patch |
| JIRA Issue | YARN-5677 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 44b22861ef6e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 72a2ae6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13312/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13312/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (YARN-5556) Support for deleting queues without requiring a RM restart
[ https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552992#comment-15552992 ]

Naganarasimha G R commented on YARN-5556:
-----------------------------------------

Triggered the build manually; not sure why it didn't trigger automatically. Maybe [~xgong], you could also take a look if you require it? Also, shall I raise JIRAs and start working on the other issues which we discussed?

> Support for deleting queues without requiring a RM restart
> ----------------------------------------------------------
>
>                 Key: YARN-5556
>                 URL: https://issues.apache.org/jira/browse/YARN-5556
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>            Reporter: Xuan Gong
>            Assignee: Naganarasimha G R
>         Attachments: YARN-5556.v1.001.patch, YARN-5556.v1.002.patch
>
> Today, we can add or modify queues without restarting the RM, via a CS
> refresh. But to delete a queue, we have to restart the ResourceManager. We
> could support deleting queues without requiring a RM restart.
[jira] [Updated] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Templeton updated YARN-5677:
-----------------------------------
    Attachment: YARN-5677.005.patch

And here's a quick update to address the checkstyle complaint.

> RM can be in active-active state for an extended period
> --------------------------------------------------------
>
>                 Key: YARN-5677
>                 URL: https://issues.apache.org/jira/browse/YARN-5677
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Daniel Templeton
>            Assignee: Daniel Templeton
>            Priority: Critical
>         Attachments: YARN-5677.001.patch, YARN-5677.002.patch, YARN-5677.003.patch, YARN-5677.004.patch, YARN-5677.005.patch
>
> In trunk, there is no maximum number of retries that I see. It appears the
> connection will be retried forever, with the active never figuring out it's
> no longer active. In my testing, the active-active state lasted almost 2
> hours with no sign of stopping before I killed it. The solution appears to
> be to cap the number of retries or the amount of time spent retrying.
>
> This issue is significant because of the asynchronous nature of job
> submission. If the active doesn't know it's not active, it will buffer up
> job submissions until it finally realizes it has become the standby. Then it
> will fail all the job submissions in bulk. In high-volume workflows, that
> behavior can create huge mass job failures.
>
> This issue is also important because the node managers will not fail over to
> the new active until the old active realizes it's the standby. Workloads
> submitted after the old active loses contact with ZK will therefore fail to
> be executed regardless of which RM the clients contact.
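The fix direction described in the issue (cap the retries so the old active eventually steps down instead of staying active-active for hours) could look roughly like this. All names here are hypothetical; this is not the actual YARN-5677 patch, only a sketch of the capped-retry idea:

```java
// Sketch of capping reconnection retries (hypothetical names, not the actual
// YARN-5677 patch): instead of retrying the ZK connection forever, give up
// after a bounded number of attempts so the caller can transition the RM to
// standby rather than keep buffering work it can no longer process.
public class CappedRetrySketch {

    public interface Zk {
        void reconnect() throws Exception;
    }

    // Returns true once reconnected, false when the cap is hit; on false the
    // caller should transition this RM to standby.
    public static boolean reconnectWithCap(Zk zk, int maxRetries, long sleepMs) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                zk.reconnect();
                return true;
            } catch (Exception e) {
                try {
                    Thread.sleep(sleepMs); // fixed backoff, for the sketch
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A connection that always fails: after 3 attempts we give up, and
        // the RM would step down instead of staying "active" indefinitely.
        boolean ok = reconnectWithCap(() -> { throw new Exception("zk down"); }, 3, 1L);
        System.out.println(ok); // false
    }
}
```

An equivalent cap on total elapsed time instead of attempt count would serve the same purpose; the key point is that the loop must terminate.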
[jira] [Assigned] (YARN-4061) [Fault tolerance] Fault tolerant writer for timeline v2
[ https://issues.apache.org/jira/browse/YARN-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joep Rottinghuis reassigned YARN-4061:
--------------------------------------
    Assignee: Joep Rottinghuis  (was: Li Lu)

> [Fault tolerance] Fault tolerant writer for timeline v2
> --------------------------------------------------------
>
>                 Key: YARN-4061
>                 URL: https://issues.apache.org/jira/browse/YARN-4061
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Li Lu
>            Assignee: Joep Rottinghuis
>              Labels: YARN-5355
>         Attachments: FaulttolerantwriterforTimelinev2.pdf
>
> We need to build a timeline writer that can be resistant to backend storage
> downtime and timeline collector failures.
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552696#comment-15552696 ]

Sunil G commented on YARN-2009:
-------------------------------

Hi [~eepayne], thanks for sharing the details of the use case. I have checked this problem and I know why that scenario is not working. {{FifoIntraQueuePreemptionPlugin.validateOutSameAppPriorityFromDemand}} was added to ensure that we do not preempt for demand from the same priority level. This code path is being hit and causing zero preemption in your scenarios.

I wanted to add a different condition for both scenarios (user-limit alone AND user-limit + priority), but I would like to do that in another ticket so it will be easier to track and test. This current patch will still handle priority and priority + user-limit. Thoughts? [~eepayne] and [~leftnoteasy]
[jira] [Updated] (YARN-4998) Minor cleanup to UGI use in AdminService
[ https://issues.apache.org/jira/browse/YARN-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Templeton updated YARN-4998:
-----------------------------------
    Attachment: YARN-4998.002.patch

Rebased

> Minor cleanup to UGI use in AdminService
> -----------------------------------------
>
>                 Key: YARN-4998
>                 URL: https://issues.apache.org/jira/browse/YARN-4998
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: resourcemanager
>    Affects Versions: 2.8.0
>            Reporter: Daniel Templeton
>            Assignee: Daniel Templeton
>            Priority: Trivial
>         Attachments: YARN-4998.001.patch, YARN-4998.002.patch
>
> Instead of calling {{UserGroupInformation.getCurrentUser()}} over and over,
> we should just use the stored {{daemonUser}}.
[jira] [Commented] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552676#comment-15552676 ]

Hadoop QA commented on YARN-5677:
---------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 6m 40s | trunk passed |
| +1 | compile | 0m 30s | trunk passed |
| +1 | checkstyle | 0m 20s | trunk passed |
| +1 | mvnsite | 0m 37s | trunk passed |
| +1 | mvneclipse | 0m 17s | trunk passed |
| +1 | findbugs | 0m 56s | trunk passed |
| +1 | javadoc | 0m 20s | trunk passed |
| +1 | mvninstall | 0m 30s | the patch passed |
| +1 | compile | 0m 29s | the patch passed |
| +1 | javac | 0m 29s | the patch passed |
| -1 | checkstyle | 0m 17s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) |
| +1 | mvnsite | 0m 35s | the patch passed |
| +1 | mvneclipse | 0m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 0s | the patch passed |
| +1 | javadoc | 0m 17s | the patch passed |
| +1 | unit | 38m 50s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 53m 4s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831989/YARN-5677.004.patch |
| JIRA Issue | YARN-5677 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 248fd8aec8ba 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2cc841f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13310/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13310/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13310/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Comment Edited] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552593#comment-15552593 ] Varun Saxena edited comment on YARN-5585 at 10/6/16 5:30 PM:
-
bq. Given entities are sorted in ascending order, to some extent a latest-first order can be achieved by doing a reverse scan. I had tried this for yarn-containers and it works fine.

A reverse scan would work fine, but how do we decide which entity types need it and which don't? And do we need container IDs in reverse order too? IIRC, in one of the calls Li mentioned that lexicographic order should be fine for the new Web UI. If required, we can have special handling for YARN-specific entities such as app attempts and containers, just as we have for apps. Whatever we do, it should be consistent across all entities. We could also add another query param to indicate that reverse lexicographic order is required.

bq. IIUC, AM can delegate collector address to any of its running containers to publish its own data. TimelineClient can not be restricted to only AM.

True. In a secure setup, the AM can even pass on the token. The point is that we support talking to the AM only; the AM can then delegate its work to anyone. The concern here was that the prefix would have to be passed around by the AM via a new protocol. But if an application wants to delegate work to other processes, it needs to open a new protocol anyway, so this concern is not specific to the prefix. Correct? However, it would be useful if you could describe the use case of multiple JVMs (the same DAGs being executed by different processes). That would help us understand the use case and decide how best to support it.

> [Atsv2] Add a new filter fromId in REST endpoints
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelinereader
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, YARN-5585.v0.patch
>
> The TimelineReader REST APIs provide many filters for retrieving applications. Along with those, it would be good to add a new filter, fromId, so that entities can be retrieved starting after the fromId.
> Current behavior: the default limit is 100. If there are 1000 entities, the REST call gives the first (or last) 100 entities. How do we retrieve the next set of 100 entities, i.e. 101 to 200, or 900 to 801?
> Example: if the applications stored in the database are app-1, app-2, ..., app-10, then *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the next 5 apps.
> The proposal is therefore to support fromId in the filter, e.g. *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to app-10.
> Since ATS targets storing a large number of entities, getting the next set of entities via fromId rather than querying all entities is a very common use case, and it is very useful for pagination in a web UI.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
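The proposed fromId semantics can be sketched in a few lines. This is an illustrative sketch, not the TimelineReader implementation: the function name `get_apps` and the strictly-after interpretation of `fromId` are assumptions taken from the example in the description (fromId=app-5 yields app-6 onward).

```python
def get_apps(all_apps, limit=100, from_id=None):
    """Sketch of the proposed fromId filter: return up to `limit` app IDs
    strictly after `from_id` in the stored (ascending lexicographic) order."""
    apps = sorted(all_apps)
    if from_id is not None:
        # Start strictly after from_id, mirroring getApps?limit=5&&fromId=app-5.
        apps = [a for a in apps if a > from_id]
    return apps[:limit]

# app-1 .. app-9: single-digit suffixes keep lexicographic order == numeric order.
apps = ["app-%d" % i for i in range(1, 10)]
page1 = get_apps(apps, limit=5)                      # first page
page2 = get_apps(apps, limit=5, from_id=page1[-1])   # next page, resuming after app-5
```

A client paginates by passing the last ID of the previous page as the next fromId, which is exactly the web-UI use case the description mentions.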
[jira] [Commented] (YARN-4206) Add life time value in Application report and web UI
[ https://issues.apache.org/jira/browse/YARN-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552580#comment-15552580 ] Gour Saha commented on YARN-4206:
-
It will be very helpful from an end-user perspective to provide the *remaining-timeouts* of the application in the report. For example, the _remaining-lifetime_ of an application will let the app-owner understand that he/she needs to extend the lifetime because the initial estimate of completing a task was not accurate. Note that timeout updates are going to be supported as per YARN-5611.

It might not be easy or straightforward to calculate the remaining-timeouts for all ApplicationTimeoutTypes (we now support multiple timeouts). However, we should try to provide them for whichever ones we can. The reason I am adding this note is that we should not discard the idea of providing the remaining-timeouts just because the remaining-timeout for one _ApplicationTimeoutType_ cannot be calculated or does not make sense.

> Add life time value in Application report and web UI
> ---
>
> Key: YARN-4206
> URL: https://issues.apache.org/jira/browse/YARN-4206
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: scheduler
> Reporter: nijel
> Assignee: nijel
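The remaining-timeout idea above is simple per-type arithmetic. A minimal sketch, assuming a map from timeout type to an absolute expiry time; the type names and the `None`-for-incalculable convention are illustrative, not the actual YARN API:

```python
import time

def remaining_timeouts(timeouts_ms, now_ms=None):
    """For each timeout type -> absolute expiry (ms since epoch), report the
    remaining time in ms; None means no expiry is set for that type, i.e. a
    remaining value cannot be calculated (mirroring the caveat in the comment)."""
    now_ms = int(time.time() * 1000) if now_ms is None else now_ms
    return {t: (expiry - now_ms if expiry is not None else None)
            for t, expiry in timeouts_ms.items()}

# LIFETIME expires 60s after the chosen "now"; a second type has no expiry set.
rem = remaining_timeouts({"LIFETIME": 1_000_060_000, "MONITOR": None},
                         now_ms=1_000_000_000)
```

Types whose remaining value comes back as `None` would simply be omitted from the report, rather than blocking the feature for the types that can be computed.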
[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552577#comment-15552577 ] Sangjin Lee commented on YARN-5667:
---
It now builds cleanly. Thanks. I am still looking into the changes, but a couple of quick comments:
(1) Could you please ensure the dependencies are as correct as they can be? Specifically, run {{mvn dependency:analyze}} to see if there are used-but-undeclared or unused-but-declared dependencies, and whether they need to be addressed.
(2) In {{yarn-server-timelineservice-hbase-tests}}, I see a specific mention of the version {{3.0.0-alpha2-SNAPSHOT}} for the dependency. It is better to manage versions in the main parent pom (I think {{hadoop-project/pom.xml}}) than to do it here; hard-coding the version here makes it difficult to change versions across the entire Hadoop repository.

> Move HBase backend code in ATS v2 into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: yarn
> Reporter: Haibo Chen
> Assignee: Haibo Chen
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch
>
> The HBase backend code currently lives along with the core ATS v2 code in the hadoop-yarn-server-timelineservice module. Because the Resource Manager depends on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM module on HBase modules is introduced (the HBase backend is pluggable, so we do not need to pull in HBase jars directly).
> In our internal effort to try ATS v2 with HBase 2.0, which depends on Hadoop 3, we encountered a circular dependency during our builds between HBase 2.0 and Hadoop 3 artifacts:
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common
> {code}
> This jira proposes that we move all HBase-backend-related code from hadoop-yarn-server-timelineservice into its own module (a possible name is yarn-server-timelineservice-storage) so that core RM modules no longer depend on HBase modules.
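Sangjin's second point (managing the version in the parent pom) follows the standard Maven dependencyManagement pattern. A hedged config sketch, not the actual Hadoop poms: the artifact coordinates and the use of {{${project.version}}} are illustrative assumptions.

```xml
<!-- hadoop-project/pom.xml (parent): declare the version once... -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-server-timelineservice</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- ...so yarn-server-timelineservice-hbase-tests/pom.xml can omit <version>
     and pick it up from the parent, keeping version bumps in one place. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-server-timelineservice</artifactId>
</dependency>
```

With this arrangement, changing the version across the repository means editing only the parent pom.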
[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552548#comment-15552548 ] Rohith Sharma K S commented on YARN-5585:
-
bq. Then the entity id order is really the earliest first. Is that what we intended?
Given that entities are sorted in ascending order, to some extent a latest-first order can be achieved by doing a reverse scan. I had tried this for yarn-containers and it works fine.
bq. It would be the client's responsibility to ensure correct data gets in
It's not about the entity data that gets stored, but about the number of extra rows that get added in HBase. Say a user publishes entities with one prefix, and the next time publishes the same entity with a different prefix or no prefix. Since there is no validation on the server end for entity updates, unnecessary rows get added for the same entityId.
bq. Also note that we expect the AM to be the sole client for a given YARN app.
IIUC, the AM can delegate the collector address to any of its running containers to publish its own data. TimelineClient cannot be restricted to *only the AM*.
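Rohith's duplicate-row concern can be seen with a toy model of the entity table. This is a deliberately simplified sketch: the real row key has more components, and `row_key` here is a hypothetical helper, but it shows why writing the same entity with different prefixes produces distinct rows when the prefix sits in the key and the server does not validate it.

```python
def row_key(app_id, entity_type, prefix, entity_id):
    """Simplified entity-table row key with the proposed entity prefix placed
    before the entity id (real schema: cluster/user/flow components omitted)."""
    return (app_id, entity_type, prefix, entity_id)

store = {}
# First write uses prefix 5; a later update of the SAME entity uses prefix 7:
store[row_key("app-1", "DAG", 5, "dag-1")] = {"state": "RUNNING"}
store[row_key("app-1", "DAG", 7, "dag-1")] = {"state": "FINISHED"}

# Without server-side validation, one logical entity now occupies two rows,
# and readers scanning by row key see two partial copies of "dag-1".
n_rows = len(store)
```

This is the argument for making the prefix mandatory and client-consistent across all writes of an entity, as YARN-5715 proposes.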
[jira] [Updated] (YARN-5677) RM can be in active-active state for an extended period
[ https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-5677:
---
Attachment: YARN-5677.004.patch

Patch to address comments.

> RM can be in active-active state for an extended period
> ---
>
> Key: YARN-5677
> URL: https://issues.apache.org/jira/browse/YARN-5677
> Project: Hadoop YARN
> Issue Type: Bug
> Components: resourcemanager
> Affects Versions: 3.0.0-alpha1
> Reporter: Daniel Templeton
> Assignee: Daniel Templeton
> Priority: Critical
> Attachments: YARN-5677.001.patch, YARN-5677.002.patch, YARN-5677.003.patch, YARN-5677.004.patch
>
> In trunk, there is no maximum number of retries that I see. It appears the connection will be retried forever, with the active never figuring out it's no longer active. In my testing, the active-active state lasted almost 2 hours with no sign of stopping before I killed it. The solution appears to be to cap the number of retries or amount of time spent retrying.
> This issue is significant because of the asynchronous nature of job submission. If the active doesn't know it's not active, it will buffer up job submissions until it finally realizes it has become the standby. Then it will fail all the job submissions in bulk. In high-volume workflows, that behavior can create huge mass job failures.
> This issue is also important because the node managers will not fail over to the new active until the old active realizes it's the standby. Workloads submitted after the old active loses contact with ZK will therefore fail to be executed regardless of which RM the clients contact.
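The fix the description proposes, capping either the retry count or the total time spent retrying, is a generic bounded-retry loop. A sketch under stated assumptions: this is not the RM/ZooKeeper client code, just the shape of the cap, with `ConnectionError` standing in for whatever the real client throws.

```python
import time

def retry_with_cap(op, max_retries=10, max_elapsed_s=60.0, backoff_s=1.0,
                   sleep=time.sleep):
    """Retry `op` until it succeeds, but give up after max_retries attempts or
    max_elapsed_s seconds, whichever comes first, so the caller can transition
    to standby instead of staying in an active-active state indefinitely."""
    start = time.monotonic()
    for attempt in range(1, max_retries + 1):
        try:
            return op()
        except ConnectionError:
            if attempt == max_retries or time.monotonic() - start >= max_elapsed_s:
                raise  # caps exhausted: surface the failure to the caller
            sleep(backoff_s)

calls = []
def flaky():
    calls.append(1)
    raise ConnectionError("ZK unreachable")

try:
    retry_with_cap(flaky, max_retries=3, backoff_s=0.0)
except ConnectionError:
    pass
attempts = len(calls)
```

Either cap alone fixes the unbounded case; using both bounds the worst case even when individual attempts are slow.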
[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552438#comment-15552438 ] Sangjin Lee commented on YARN-5585:
---
bq. Entity IDs' can be anything. Even a completely alphabetical sequence can be an entity ID. So it will not be possible to define a reverse order for every generic entity ID. Is this your question?
Yes, it was more of a realization on my part of how it behaves. For some reason, I thought that we would return the most recent entities first (i.e. the reverse order of the entity ids). For example, if we had entity_0, entity_1, ..., entity_9, and queried with limit = 5, I had thought that we would return entity_5 through entity_9. Now I realize we would return entity_0 through entity_4 (that also explains some of Rohith's early comments). So the entity id order is really earliest-first. Is that what we intended? I know "reversing" an arbitrary string is not easy, but I want to make sure we're on the same page and whether there is a way to accomplish a most-recent-first entity order.
bq. Secondly, what if a user misses providing a prefixId in subsequent updates?
I agree with Varun on this. Even without the prefix, clients can set any value for entities, and the storage will store them per the schema. It would be the client's responsibility to ensure correct data gets in. Also note that we expect the AM to be the sole client for a given YARN app.
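Sangjin's entity_0 … entity_9 example can be checked directly. The `read_entities` sketch below is hypothetical shorthand for the current ascending-lexicographic read path (single-digit suffixes are used so lexicographic order matches numeric order):

```python
def read_entities(ids, limit):
    """Current behaviour sketched: ascending lexicographic order, so a limit
    query returns the EARLIEST entities first."""
    return sorted(ids)[:limit]

ids = ["entity_%d" % i for i in range(10)]
first_five = read_entities(ids, 5)  # earliest-first, as Sangjin realized

# A most-recent-first order would need either a reverse scan or a descending
# key component (e.g. the proposed entity prefix), so that the natural
# ascending scan yields the latest entities first:
latest_five = sorted(ids, reverse=True)[:5]
```

The second expression shows the desired result; the open question in the thread is how to get storage to produce that order cheaply for arbitrary entity IDs.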
[jira] [Commented] (YARN-5713) Update jackson from 1.9.13 to 2.x in hadoop-yarn
[ https://issues.apache.org/jira/browse/YARN-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552436#comment-15552436 ] Hadoop QA commented on YARN-5713:
-
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 10s | Maven dependency ordering for branch |
| +1 | mvninstall | 8m 43s | trunk passed |
| +1 | compile | 2m 45s | trunk passed |
| +1 | checkstyle | 0m 47s | trunk passed |
| +1 | mvnsite | 2m 48s | trunk passed |
| +1 | mvneclipse | 1m 27s | trunk passed |
| +1 | findbugs | 4m 21s | trunk passed |
| +1 | javadoc | 1m 40s | trunk passed |
| 0 | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 53s | the patch passed |
| +1 | compile | 2m 24s | the patch passed |
| +1 | javac | 2m 24s | the patch passed |
| +1 | checkstyle | 0m 40s | hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 131 unchanged - 1 fixed = 131 total (was 132) |
| +1 | mvnsite | 2m 15s | the patch passed |
| +1 | mvneclipse | 1m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 6s | The patch has no ill-formed XML file. |
| +1 | findbugs | 4m 22s | the patch passed |
| +1 | javadoc | 0m 16s | hadoop-yarn-api in the patch passed. |
| +1 | javadoc | 0m 24s | hadoop-yarn-common in the patch passed. |
| +1 | javadoc | 0m 11s | hadoop-yarn-server-applicationhistoryservice in the patch passed. |
| +1 | javadoc | 0m 13s | hadoop-yarn-server-timelineservice in the patch passed. |
| +1 | javadoc | 0m 9s | hadoop-yarn-server-timeline-pluginstorage in the patch passed. |
| +1 | javadoc | 0m 10s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry generated 0 new + 48 unchanged - 4 fixed = 48 total (was 52) |
| +1 | unit | 0m 22s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 2m 15s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 3m 7s | hadoop-yarn-server-applicationhistoryservice in the patch failed. |
| +1 | unit | 0m 43s | hadoop-yarn-server-timelineservice
[jira] [Commented] (YARN-5707) Add manager class for resource profiles
[ https://issues.apache.org/jira/browse/YARN-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552409#comment-15552409 ] Varun Vasudev commented on YARN-5707:
-
[~wangda] - is it ok if I go ahead and commit the patch?

> Add manager class for resource profiles
> ---
>
> Key: YARN-5707
> URL: https://issues.apache.org/jira/browse/YARN-5707
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Reporter: Varun Vasudev
> Assignee: Varun Vasudev
> Attachments: YARN-5707-YARN-3926.001.patch, YARN-5707-YARN-3926.002.patch, YARN-5707-YARN-3926.003.patch, YARN-5707-YARN-3926.004.patch
>
> Add a class that manages the resource profiles that are available for applications to use.
[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST
[ https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552387#comment-15552387 ] Hadoop QA commented on YARN-5561:
-
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 10m 59s | YARN-5355 passed |
| +1 | compile | 0m 19s | YARN-5355 passed |
| +1 | checkstyle | 0m 14s | YARN-5355 passed |
| +1 | mvnsite | 0m 25s | YARN-5355 passed |
| +1 | mvneclipse | 0m 15s | YARN-5355 passed |
| +1 | findbugs | 0m 34s | YARN-5355 passed |
| +1 | javadoc | 0m 18s | YARN-5355 passed |
| +1 | mvninstall | 0m 19s | the patch passed |
| +1 | compile | 0m 17s | the patch passed |
| +1 | javac | 0m 17s | the patch passed |
| -1 | checkstyle | 0m 11s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice: The patch generated 13 new + 19 unchanged - 0 fixed = 32 total (was 19) |
| +1 | mvnsite | 0m 23s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 39s | the patch passed |
| +1 | javadoc | 0m 12s | the patch passed |
| +1 | unit | 0m 44s | hadoop-yarn-server-timelineservice in the patch passed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 17m 14s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831977/0001-YARN-5561.YARN-5355.patch |
| JIRA Issue | YARN-5561 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 4ee54e96c52d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-5355 / 5d7ad39 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13309/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13309/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13309/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |

This message was automatically generated.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST
> --
[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST
[ https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552370#comment-15552370 ] Varun Saxena commented on YARN-5561: Thanks [~rohithsharma] for the patch.
# We do not store configurations at the container and app-attempt level, so we can leave out the query params associated with them.
# Similarly, no metrics are stored at the app-attempt level. The same goes for relationships. We can add them when we support them, if only to reduce lines in code and documentation :)
# I think the javadoc will have to be updated, otherwise we will get a -1 in the QA report. Yes, this will unnecessarily increase code size, but there is no other way out.
# Update tests for app attempts too?
> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelinereader
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, YARN-5561.patch, YARN-5561.v0.patch
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via REST APIs. It is also required to know about all the entities in an application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display the list of entities that can be queried.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
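The proposed endpoints above form a small hierarchy under the timeline reader's base path. As an illustrative sketch (not part of the patch — the helper names are invented; only the URL shapes come from the JIRA text), the paths can be built like this:

```python
# Hypothetical helpers (not from the YARN-5561 patch) that build the REST
# endpoint paths proposed in the issue description.

BASE = "/ws/v2/timeline"

def apps_url():
    return BASE + "/apps"

def app_attempts_url(app_id):
    return "%s/apps/%s/appattempts" % (BASE, app_id)

def containers_url(app_id, attempt_id):
    return "%s/apps/%s/appattempts/%s/containers" % (BASE, app_id, attempt_id)

def entities_url(app_id):
    return "%s/apps/%s/entities" % (BASE, app_id)

print(containers_url("application_1464213707405_0001",
                     "appattempt_1464213707405_0001_000001"))
```

A Web UI would issue GETs against these paths in order: list apps, drill into an app's attempts, then into an attempt's containers.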
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552342#comment-15552342 ] Varun Saxena commented on YARN-5699: As for the tracking URL part though, AHS will not be enabled with ATSv2 and the new Web UI.
> Retrospect yarn entity fields which are publishing in events info fields.
> ---
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 0002-YARN-5699.YARN-5355.patch
>
> Currently, all the container information is published in two places: some of it in the entity info (top level) and some in the event info.
> For containers, some of the event info should be published at the container info level, for example: container exit status, container state, created time, finished time. This is general information required for the container report, so it is better to publish it in the top-level info field.
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552332#comment-15552332 ] Varun Saxena commented on YARN-5699: Thanks [~rohithsharma] for the latest patch. A few comments:
# In NMTimelinePublisher#publishContainerEvent, shouldn't we check whether httpAddress is null, rather than whether it is not null?
# I see that we are publishing the tracking URL twice. If AHS is enabled, the tracking URL is changed to the AHS web endpoint when the attempt finishes, so the info field will have the AHS URL as its latest value. In that case it should be attached to the event as well, so that we also know the original AM tracking URL (i.e. from when the attempt was registered). Thoughts?
# In TimelineServiceV2Publisher#appStateUpdated, the changes are not required as we are effectively doing the same thing as before. We should definitely publish this info at the event level because the event is an app state update event. Should we also update it at the info level so that we can filter apps by their current state while they are running?
# Do we need to publish master container info at the app attempt level twice?
bq. The updated patch also fixes another bug: the NM-published http address port always came out as zero.
Thanks for the fix.
> Retrospect yarn entity fields which are publishing in events info fields.
> ---
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 0002-YARN-5699.YARN-5355.patch
>
> Currently, all the container information is published in two places: some of it in the entity info (top level) and some in the event info.
> For containers, some of the event info should be published at the container info level, for example: container exit status, container state, created time, finished time. This is general information required for the container report, so it is better to publish it in the top-level info field.
[jira] [Moved] (YARN-5713) Update jackson from 1.9.13 to 2.x in hadoop-yarn
[ https://issues.apache.org/jira/browse/YARN-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka moved HADOOP-13677 to YARN-5713: -- Target Version/s: 3.0.0-alpha2 (was: 3.0.0-alpha2) Component/s: (was: build) build Key: YARN-5713 (was: HADOOP-13677) Project: Hadoop YARN (was: Hadoop Common)
> Update jackson from 1.9.13 to 2.x in hadoop-yarn
> ---
>
> Key: YARN-5713
> URL: https://issues.apache.org/jira/browse/YARN-5713
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: build
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Attachments: HADOOP-13677.01.patch, HADOOP-13677.02.patch
[jira] [Commented] (YARN-5101) YARN_APPLICATION_UPDATED event is parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with reversed order
[ https://issues.apache.org/jira/browse/YARN-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552265#comment-15552265 ] Hudson commented on YARN-5101: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10556 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10556/]) YARN-5101. YARN_APPLICATION_UPDATED event is parsed in (rohithsharmaks: rev 4d2f380d787a6145f45c87ba663079fedbf645b8)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
> YARN_APPLICATION_UPDATED event is parsed in
> ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with
> reversed order
> ---
>
> Key: YARN-5101
> URL: https://issues.apache.org/jira/browse/YARN-5101
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 2.8.0
> Reporter: Xuan Gong
> Assignee: Sunil G
> Fix For: 2.8.0
> Attachments: YARN-5101.0001.patch, YARN-5101.0002.patch
>
> Right now, the application events are parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport in timestamp-descending order, which means the later events are parsed first and the earlier events of the same type override their information. In https://issues.apache.org/jira/browse/YARN-4044, we introduced YARN_APPLICATION_UPDATED events, which may be submitted by the RM multiple times in one application life cycle. This could cause problems.
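The parse-order bug fixed above can be shown with a hedged, self-contained sketch (the event contents are invented; only the "apply events in timestamp-descending order, so earlier events override later ones" behavior comes from the report):

```python
# Hypothetical illustration of the YARN-5101 bug: two YARN_APPLICATION_UPDATED
# events, applied in timestamp-DESCENDING order, so the older event is applied
# last and overwrites the newer values.

events = [
    {"id": "YARN_APPLICATION_UPDATED", "timestamp": 1, "info": {"queue": "default"}},
    {"id": "YARN_APPLICATION_UPDATED", "timestamp": 2, "info": {"queue": "prod"}},
]

report = {}
for e in sorted(events, key=lambda e: e["timestamp"], reverse=True):
    report.update(e["info"])  # later iterations (older events) win -- the bug

print(report["queue"])  # "default", although the latest update set "prod"
```

Parsing in ascending order (or skipping already-seen event types) leaves the newest value in the report.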
[jira] [Updated] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST
[ https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5561: Attachment: 0001-YARN-5561.YARN-5355.patch
> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelinereader
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, YARN-5561.patch, YARN-5561.v0.patch
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via REST APIs. It is also required to know about all the entities in an application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display the list of entities that can be queried.
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552237#comment-15552237 ] Hadoop QA commented on YARN-5699: - (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 0m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 2s | YARN-5355 passed |
| +1 | compile | 1m 38s | YARN-5355 passed |
| +1 | checkstyle | 0m 29s | YARN-5355 passed |
| +1 | mvnsite | 1m 34s | YARN-5355 passed |
| +1 | mvneclipse | 0m 41s | YARN-5355 passed |
| +1 | findbugs | 2m 18s | YARN-5355 passed |
| +1 | javadoc | 0m 51s | YARN-5355 passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 8s | the patch passed |
| +1 | compile | 1m 26s | the patch passed |
| +1 | javac | 1m 26s | the patch passed |
| +1 | checkstyle | 0m 27s | the patch passed |
| +1 | mvnsite | 1m 18s | the patch passed |
| +1 | mvneclipse | 0m 35s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 24s | the patch passed |
| +1 | javadoc | 0m 44s | the patch passed |
| +1 | unit | 0m 25s | hadoop-yarn-server-common in the patch passed. |
| +1 | unit | 13m 11s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | unit | 36m 41s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 74m 35s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831965/0002-YARN-5699.YARN-5355.patch |
| JIRA Issue | YARN-5699 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 35d5a49f12ea 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-5355 / 5d7ad39 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13307/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-y
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552187#comment-15552187 ] Rohith Sharma K S commented on YARN-5699: - The updated patch also fixes another bug: the NM-published http address port always came out as zero. This is because the web server had not yet been started when the NMTimelinePublisher started, so the http port used was zero.
> Retrospect yarn entity fields which are publishing in events info fields.
> ---
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 0002-YARN-5699.YARN-5355.patch
>
> Currently, all the container information is published in two places: some of it in the entity info (top level) and some in the event info.
> For containers, some of the event info should be published at the container info level, for example: container exit status, container state, created time, finished time. This is general information required for the container report, so it is better to publish it in the top-level info field.
[jira] [Comment Edited] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552144#comment-15552144 ] Varun Saxena edited comment on YARN-5585 at 10/6/16 3:01 PM:
bq. I was thinking to use same REST API for both by using SingleColumnFilter. One cons I see is table scan for all the entityType i.e reflect in read performance.
We should not use SingleColumnValueFilter if we know the prefix because, as you said, the former will lead to relatively slower read performance. Basically, we need to differentiate between not having a prefix for the entity type and the user being unable to supply it.
bq. I would have thought that we store the entities in the reverse entity id order, but it appears that the entity id is encoded into the row key as is (EntityRowKey). Am I reading that right? If so, this is a bug to fix.
Entity IDs can be anything; even a completely alphabetical sequence can be an entity ID. So it will not be possible to define a reverse order for every generic entity ID. Is this your question?
bq. Firstly about multi JVM which makes application programmer to define new protocol for transferring prefixId.
Trying to understand this more. Can the same DAG be executed by multiple Tez AMs?
bq. Secondly, what if users misses providing an prefixId in subsequent updates.?
This should be caught during the integration phase. Right?

was (Author: varun_saxena):
bq. I was thinking to use same REST API for both by using SingleColumnFilter. One cons I see is table scan for all the entityType i.e reflect in read performance.
We should not use SingleColumnValueFilter if we know the prefix because, as you said, the former will lead to relatively slower read performance. Basically, we need to differentiate between having a prefix for the entity type and the user being unable to supply it.
bq. I would have thought that we store the entities in the reverse entity id order, but it appears that the entity id is encoded into the row key as is (EntityRowKey). Am I reading that right? If so, this is a bug to fix.
Entity IDs can be anything; even a completely alphabetical sequence can be an entity ID. So it will not be possible to define a reverse order for every generic entity ID. Is this your question?
bq. Firstly about multi JVM which makes application programmer to define new protocol for transferring prefixId.
Trying to understand this more. Can the same DAG be executed by multiple Tez AMs?
bq. Secondly, what if users misses providing an prefixId in subsequent updates.?
This should be caught during the integration phase. Right?

> [Atsv2] Add a new filter fromId in REST endpoints
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelinereader
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, YARN-5585.v0.patch
>
> The TimelineReader REST APIs provide a lot of filters to retrieve applications. Along with those, it would be good to add a new filter, fromId, so that entities can be retrieved after the fromId.
> Current behavior: the default limit is set to 100. If there are 1000 entities, the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if the applications stored in the database are app-1, app-2 ... app-10, then *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the next 5 apps.
> So the proposal is to have fromId in the filter, like *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to app-10.
> Since ATS targets storage of a large number of entities, getting the next set of entities using fromId rather than querying all the entities is a very common use case. This is very useful for pagination in the web UI.
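The "reverse entity id order" point above is what the entity prefix (YARN-5715) addresses: since a generic entity ID has no natural reverse order, a numeric prefix is written into the row key before the ID. A hedged sketch of one common scheme (an assumption, not the actual EntityRowKey encoding: store the bitwise complement `LONG_MAX - prefix` so that larger, more recent prefixes sort first under ascending row-key order):

```python
# Illustrative sketch only: model the entity-prefix portion of a row key.
# HBase sorts row keys byte-wise ascending, so storing (LONG_MAX - prefix)
# makes larger (more recent) prefixes sort first.

LONG_MAX = 2**63 - 1

def entity_row_key_suffix(entity_prefix, entity_id):
    inverted = LONG_MAX - entity_prefix
    return (inverted, entity_id)

# Entities written with creation-time prefixes 100, 200, 300:
rows = sorted(entity_row_key_suffix(p, "entity-%d" % p) for p in (100, 200, 300))
print([eid for _, eid in rows])  # ['entity-300', 'entity-200', 'entity-100']
```

With such an encoding, a plain forward scan returns the most recent entities first, which is the "natural" return order the discussion asks for.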
[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints
[ https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552144#comment-15552144 ] Varun Saxena commented on YARN-5585:
bq. I was thinking to use same REST API for both by using SingleColumnFilter. One cons I see is table scan for all the entityType i.e reflect in read performance.
We should not use SingleColumnValueFilter if we know the prefix because, as you said, the former will lead to relatively slower read performance. Basically, we need to differentiate between having a prefix for the entity type and the user being unable to supply it.
bq. I would have thought that we store the entities in the reverse entity id order, but it appears that the entity id is encoded into the row key as is (EntityRowKey). Am I reading that right? If so, this is a bug to fix.
Entity IDs can be anything; even a completely alphabetical sequence can be an entity ID. So it will not be possible to define a reverse order for every generic entity ID. Is this your question?
bq. Firstly about multi JVM which makes application programmer to define new protocol for transferring prefixId.
Trying to understand this more. Can the same DAG be executed by multiple Tez AMs?
bq. Secondly, what if users misses providing an prefixId in subsequent updates.?
This should be caught during the integration phase. Right?
> [Atsv2] Add a new filter fromId in REST endpoints
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelinereader
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Priority: Critical
> Attachments: 0001-YARN-5585.patch, YARN-5585-workaround.patch, YARN-5585.v0.patch
>
> The TimelineReader REST APIs provide a lot of filters to retrieve applications. Along with those, it would be good to add a new filter, fromId, so that entities can be retrieved after the fromId.
> Current behavior: the default limit is set to 100. If there are 1000 entities, the REST call gives the first/last 100 entities. How do we retrieve the next set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if the applications stored in the database are app-1, app-2 ... app-10, then *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the next 5 apps.
> So the proposal is to have fromId in the filter, like *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to app-10.
> Since ATS targets storage of a large number of entities, getting the next set of entities using fromId rather than querying all the entities is a very common use case. This is very useful for pagination in the web UI.
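The fromId semantics described in the issue can be sketched with a hypothetical in-memory model (this is not the timeline reader implementation; the function name and list-based store are invented for illustration):

```python
# Hypothetical model of fromId-based pagination as proposed in YARN-5585:
# return up to `limit` apps strictly after `from_id` (exclusive).

def get_apps(apps, limit=100, from_id=None):
    start = 0
    if from_id is not None:
        start = apps.index(from_id) + 1  # raises ValueError if from_id is unknown
    return apps[start:start + limit]

apps = ["app-%d" % i for i in range(1, 11)]      # app-1 .. app-10
print(get_apps(apps, limit=5))                   # first page: app-1 .. app-5
print(get_apps(apps, limit=5, from_id="app-5"))  # next page: app-6 .. app-10
```

The last returned ID of one page becomes the fromId of the next request, which is exactly the pagination pattern a web UI needs.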
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15552048#comment-15552048 ] Rohith Sharma K S commented on YARN-5699: - Updated the patch by reverting the changes made in NMTimelinePublisher.
> Retrospect yarn entity fields which are publishing in events info fields.
> ---
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 0002-YARN-5699.YARN-5355.patch
>
> Currently, all the container information is published in two places: some of it in the entity info (top level) and some in the event info.
> For containers, some of the event info should be published at the container info level, for example: container exit status, container state, created time, finished time. This is general information required for the container report, so it is better to publish it in the top-level info field.
[jira] [Updated] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5699: Attachment: 0002-YARN-5699.YARN-5355.patch
> Retrospect yarn entity fields which are publishing in events info fields.
> ---
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 0002-YARN-5699.YARN-5355.patch
>
> Currently, all the container information is published in two places: some of it in the entity info (top level) and some in the event info.
> For containers, some of the event info should be published at the container info level, for example: container exit status, container state, created time, finished time. This is general information required for the container report, so it is better to publish it in the top-level info field.
[jira] [Commented] (YARN-5156) YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state
[ https://issues.apache.org/jira/browse/YARN-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551970#comment-15551970 ] Rohith Sharma K S commented on YARN-5156: - Alternatively, an event filter works fine. The parser needs to presume that the container state is completed if the YARN_CONTAINER_FINISHED event exists.
> YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state
> ---
>
> Key: YARN-5156
> URL: https://issues.apache.org/jira/browse/YARN-5156
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelineserver
> Reporter: Li Lu
> Assignee: Vrushali C
> Labels: YARN-5355
> Fix For: YARN-5355
> Attachments: YARN-5156-YARN-2928.01.patch, YARN-5156-YARN-5355.01.patch, YARN-5156-YARN-5355.02.patch
>
> On container finished, we're reporting "YARN_CONTAINER_STATE: "RUNNING"". Did we design this deliberately, or is it a bug?
> {code}
> {
>   metrics: [ ],
>   events: [
>     {
>       id: "YARN_CONTAINER_FINISHED",
>       timestamp: 1464213765890,
>       info: {
>         YARN_CONTAINER_EXIT_STATUS: 0,
>         YARN_CONTAINER_STATE: "RUNNING",
>         YARN_CONTAINER_DIAGNOSTICS_INFO: ""
>       }
>     },
>     {
>       id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
>       timestamp: 1464213761133,
>       info: { }
>     },
>     {
>       id: "YARN_CONTAINER_CREATED",
>       timestamp: 1464213761132,
>       info: { }
>     },
>     {
>       id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
>       timestamp: 1464213761132,
>       info: { }
>     }
>   ],
>   id: "container_e15_1464213707405_0001_01_18",
>   type: "YARN_CONTAINER",
>   createdtime: 1464213761132,
>   info: {
>     YARN_CONTAINER_ALLOCATED_PRIORITY: "20",
>     YARN_CONTAINER_ALLOCATED_VCORE: 1,
>     YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS: "10.22.16.164:0",
>     UID: "yarn_cluster!application_1464213707405_0001!YARN_CONTAINER!container_e15_1464213707405_0001_01_18",
>     YARN_CONTAINER_ALLOCATED_HOST: "10.22.16.164",
>     YARN_CONTAINER_ALLOCATED_MEMORY: 1024,
>     SYSTEM_INFO_PARENT_ENTITY: {
>       type: "YARN_APPLICATION_ATTEMPT",
>       id: "appattempt_1464213707405_0001_01"
>     },
>     YARN_CONTAINER_ALLOCATED_PORT: 64694
>   },
>   configs: { },
>   isrelatedto: { },
>   relatesto: { }
> }
> {code}
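Rohith's workaround above — treat the presence of a YARN_CONTAINER_FINISHED event as meaning the container completed, instead of trusting the YARN_CONTAINER_STATE carried in the event info — could look like this on the consumer side (an illustrative sketch; the function name is invented, and the entity layout follows the JSON in the issue):

```python
# Hypothetical consumer-side workaround for YARN-5156: derive the container
# state from the presence of the YARN_CONTAINER_FINISHED event, since the
# event info may still record YARN_CONTAINER_STATE: "RUNNING".

def effective_container_state(entity):
    for event in entity.get("events", []):
        if event.get("id") == "YARN_CONTAINER_FINISHED":
            return "COMPLETE"
    return entity.get("info", {}).get("YARN_CONTAINER_STATE", "RUNNING")

entity = {"events": [{"id": "YARN_CONTAINER_FINISHED",
                      "info": {"YARN_CONTAINER_STATE": "RUNNING"}}],
          "info": {}}
print(effective_container_state(entity))  # COMPLETE
```

This is only a reader-side presumption; the proper fix is for the publisher to record the correct state in the finished event.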
[jira] [Updated] (YARN-5571) [Atsv2] Query App REST endpoint need not to expose queryParams such userId/flowname/flowrunid
[ https://issues.apache.org/jira/browse/YARN-5571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naganarasimha G R updated YARN-5571: Summary: [Atsv2] Query App REST endpoint need not to expose queryParams such userId/flowname/flowrunid (was: [Atsv2] Query App REST endpoing need not to expose queryParams such userId/flowname/flowrunid)
> [Atsv2] Query App REST endpoint need not to expose queryParams such
> userId/flowname/flowrunid
> ---
>
> Key: YARN-5571
> URL: https://issues.apache.org/jira/browse/YARN-5571
> Project: Hadoop YARN
> Issue Type: Bug
> Components: timelinereader
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
>
> The timeline reader provides a REST endpoint for querying an app, {{GET /ws/v2/timeline/apps/\{app id\}}}, along with query params as filters. But query params such as {{userId/flowname/flowrunid}} are not useful when querying an app by app-id: in a YARN cluster, only one app-id exists throughout its lifetime, so userId/flowname/flowrunid are not useful at all for the app-id REST endpoint.
> {noformat}
> @GET
> @Path("/apps/{appid}/")
> @Produces(MediaType.APPLICATION_JSON)
> public TimelineEntity getApp(
>     @Context HttpServletRequest req,
>     @Context HttpServletResponse res,
>     @PathParam("appid") String appId,
>     @QueryParam("flowname") String flowName,
>     @QueryParam("flowrunid") String flowRunId,
>     @QueryParam("userid") String userId,
>     @QueryParam("confstoretrieve") String confsToRetrieve,
>     @QueryParam("metricstoretrieve") String metricsToRetrieve,
>     @QueryParam("fields") String fields,
>     @QueryParam("metricslimit") String metricsLimit) {
>   return getApp(req, res, null, appId, flowName, flowRunId, userId,
>       confsToRetrieve, metricsToRetrieve, fields, metricsLimit);
> }
> {noformat}
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551926#comment-15551926 ] Hadoop QA commented on YARN-5699: - (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 3m 48s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 45s | YARN-5355 passed |
| +1 | compile | 1m 34s | YARN-5355 passed |
| +1 | checkstyle | 0m 33s | YARN-5355 passed |
| +1 | mvnsite | 1m 31s | YARN-5355 passed |
| +1 | mvneclipse | 0m 46s | YARN-5355 passed |
| +1 | findbugs | 2m 17s | YARN-5355 passed |
| +1 | javadoc | 0m 55s | YARN-5355 passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 14s | the patch passed |
| +1 | compile | 1m 33s | the patch passed |
| +1 | javac | 1m 33s | the patch passed |
| +1 | checkstyle | 0m 28s | the patch passed |
| +1 | mvnsite | 1m 19s | the patch passed |
| +1 | mvneclipse | 0m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 30s | the patch passed |
| +1 | javadoc | 0m 50s | the patch passed |
| +1 | unit | 0m 25s | hadoop-yarn-server-common in the patch passed. |
| +1 | unit | 13m 10s | hadoop-yarn-server-nodemanager in the patch passed. |
| -1 | unit | 37m 4s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 80m 1s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831943/0001-YARN-5699.YARN-5355.patch |
| JIRA Issue | YARN-5699 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 3c5553fe091d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-5355 / 5d7ad39 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/13306/artifact/pa
[jira] [Commented] (YARN-5156) YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state
[ https://issues.apache.org/jira/browse/YARN-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551901#comment-15551901 ] Varun Saxena commented on YARN-5156: [~rohithsharma], as per the current code, container state can only be either RUNNING or COMPLETED, and it becomes COMPLETED only on the container finished event. Can you use event filters and check for the container finished event for your use case? We went ahead with removing this container state because it can be easily deduced from the event itself; in any case, the container state at that time was being carried at the event level. If we want a more holistic view of container states, we can publish all possible internal container states, but this would require changes in ContainerImpl. > YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state > - > > Key: YARN-5156 > URL: https://issues.apache.org/jira/browse/YARN-5156 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Vrushali C > Labels: YARN-5355 > Fix For: YARN-5355 > > Attachments: YARN-5156-YARN-2928.01.patch, > YARN-5156-YARN-5355.01.patch, YARN-5156-YARN-5355.02.patch > > > On container finished, we're reporting "YARN_CONTAINER_STATE: "RUNNING"". Is > this designed deliberately, or is it a bug? 
> {code} > { > metrics: [ ], > events: [ > { > id: "YARN_CONTAINER_FINISHED", > timestamp: 1464213765890, > info: { > YARN_CONTAINER_EXIT_STATUS: 0, > YARN_CONTAINER_STATE: "RUNNING", > YARN_CONTAINER_DIAGNOSTICS_INFO: "" > } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED", > timestamp: 1464213761133, > info: { } > }, > { > id: "YARN_CONTAINER_CREATED", > timestamp: 1464213761132, > info: { } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED", > timestamp: 1464213761132, > info: { } > } > ], > id: "container_e15_1464213707405_0001_01_18", > type: "YARN_CONTAINER", > createdtime: 1464213761132, > info: { > YARN_CONTAINER_ALLOCATED_PRIORITY: "20", > YARN_CONTAINER_ALLOCATED_VCORE: 1, > YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS: "10.22.16.164:0", > UID: > "yarn_cluster!application_1464213707405_0001!YARN_CONTAINER!container_e15_1464213707405_0001_01_18", > YARN_CONTAINER_ALLOCATED_HOST: "10.22.16.164", > YARN_CONTAINER_ALLOCATED_MEMORY: 1024, > SYSTEM_INFO_PARENT_ENTITY: { > type: "YARN_APPLICATION_ATTEMPT", > id: "appattempt_1464213707405_0001_01" > }, > YARN_CONTAINER_ALLOCATED_PORT: 64694 > }, > configs: { }, > isrelatedto: { }, > relatesto: { } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
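The suggestion in the comment above — that container state can be dropped because it is deducible from events — can be sketched as follows. This is an illustrative Python model of the idea (not YARN's actual reader code): COMPLETED is inferred from the presence of a YARN_CONTAINER_FINISHED event, and an `eventfilters`-style check selects entities carrying a given event.

```python
# Sketch: deduce container state from timeline events instead of storing it.
# The entity dicts mirror the JSON shape shown in this thread; the functions
# themselves are hypothetical helpers, not part of the YARN codebase.

FINISHED_EVENT = "YARN_CONTAINER_FINISHED"

def deduce_state(entity):
    """Return COMPLETED if the entity carries a finished event, else RUNNING."""
    event_ids = {e["id"] for e in entity.get("events", [])}
    return "COMPLETED" if FINISHED_EVENT in event_ids else "RUNNING"

def filter_by_event(entities, event_id):
    """Mimic an event filter: keep entities that have an event with this id."""
    return [e for e in entities
            if any(ev["id"] == event_id for ev in e.get("events", []))]

running = {"id": "container_1", "events": [{"id": "YARN_CONTAINER_CREATED"}]}
done = {"id": "container_2", "events": [{"id": "YARN_CONTAINER_CREATED"},
                                        {"id": FINISHED_EVENT}]}

assert deduce_state(running) == "RUNNING"
assert deduce_state(done) == "COMPLETED"
assert [e["id"] for e in filter_by_event([running, done], FINISHED_EVENT)] == ["container_2"]
```

This is why removing YARN_CONTAINER_STATE from the finished event loses nothing for readers that can filter on the event id — the state is implied by the event's presence.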
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551908#comment-15551908 ] Varun Saxena commented on YARN-5699: bq. Otherwise we miss the this information else container_state info should be published from NM So the reason this was removed in YARN-5156 was that container state can only be either RUNNING or COMPLETED, and it becomes COMPLETED only on the CONTAINER_FINISHED event, which means it can be easily deduced from the event. At the time, container state was carried in the event info, so the majority opinion went with removing container state altogether. I have made a comment on YARN-5156 too. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch > > > Currently, all the container information is published in two places: some of > it at the entity info (top) level and some at the event info level. > For containers, some of the event info should be published at the container > info level, for example: container exit status, container state, createdTime, > finished time. This is general container information required for the > container report, so it is better to publish it in the top-level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5156) YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state
[ https://issues.apache.org/jira/browse/YARN-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551837#comment-15551837 ] Rohith Sharma K S commented on YARN-5156: - After YARN-4129, publishing container info from the RM is disabled by default. So the YARN_CONTAINER_STATE info will not be published by the NM, even though it is important for retrieving COMPLETED containers using filters. I just noticed this while discussing YARN-5699. cc: [~naganarasimha...@apache.org] [~varun_saxena] [~gtCarrera9] [~vrushalic] [~sjlee0] > YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state > - > > Key: YARN-5156 > URL: https://issues.apache.org/jira/browse/YARN-5156 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Vrushali C > Labels: YARN-5355 > Fix For: YARN-5355 > > Attachments: YARN-5156-YARN-2928.01.patch, > YARN-5156-YARN-5355.01.patch, YARN-5156-YARN-5355.02.patch > > > On container finished, we're reporting "YARN_CONTAINER_STATE: "RUNNING"". Is > this designed deliberately, or is it a bug? 
> {code} > { > metrics: [ ], > events: [ > { > id: "YARN_CONTAINER_FINISHED", > timestamp: 1464213765890, > info: { > YARN_CONTAINER_EXIT_STATUS: 0, > YARN_CONTAINER_STATE: "RUNNING", > YARN_CONTAINER_DIAGNOSTICS_INFO: "" > } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED", > timestamp: 1464213761133, > info: { } > }, > { > id: "YARN_CONTAINER_CREATED", > timestamp: 1464213761132, > info: { } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED", > timestamp: 1464213761132, > info: { } > } > ], > id: "container_e15_1464213707405_0001_01_18", > type: "YARN_CONTAINER", > createdtime: 1464213761132, > info: { > YARN_CONTAINER_ALLOCATED_PRIORITY: "20", > YARN_CONTAINER_ALLOCATED_VCORE: 1, > YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS: "10.22.16.164:0", > UID: > "yarn_cluster!application_1464213707405_0001!YARN_CONTAINER!container_e15_1464213707405_0001_01_18", > YARN_CONTAINER_ALLOCATED_HOST: "10.22.16.164", > YARN_CONTAINER_ALLOCATED_MEMORY: 1024, > SYSTEM_INFO_PARENT_ENTITY: { > type: "YARN_APPLICATION_ATTEMPT", > id: "appattempt_1464213707405_0001_01" > }, > YARN_CONTAINER_ALLOCATED_PORT: 64694 > }, > configs: { }, > isrelatedto: { }, > relatesto: { } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551812#comment-15551812 ] Naganarasimha G R commented on YARN-5699: - I had missed YARN-5156; let me check how to include that scenario too. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch > > > Currently, all the container information is published in two places: some of > it at the entity info (top) level and some at the event info level. > For containers, some of the event info should be published at the container > info level, for example: container exit status, container state, createdTime, > finished time. This is general container information required for the > container report, so it is better to publish it in the top-level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551803#comment-15551803 ] Naganarasimha G R commented on YARN-5699: - There was a detailed discussion on this, and we concluded that it should be in the NM only. And yes, all the required information is getting published from the NM, right? > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch > > > Currently, all the container information is published in two places: some of > it at the entity info (top) level and some at the event info level. > For containers, some of the event info should be published at the container > info level, for example: container exit status, container state, createdTime, > finished time. This is general container information required for the > container report, so it is better to publish it in the top-level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551786#comment-15551786 ] Rohith Sharma K S commented on YARN-5699: - bq. Is this intentional ? yes, because the same information also published by RM. So I think we can keep at one place. {code} [ { "events": [ { "id": "YARN_RM_CONTAINER_FINISHED", "timestamp": 1475469588288, "info": { "YARN_CONTAINER_STATE": "COMPLETE", "YARN_CONTAINER_EXIT_STATUS": 0, "YARN_CONTAINER_DIAGNOSTICS_INFO": "" } }, { "id": "YARN_RM_CONTAINER_CREATED", "timestamp": 1475469570721, "info": { "YARN_CONTAINER_ALLOCATED_PORT": 25006, "YARN_CONTAINER_ALLOCATED_MEMORY": 1024, "YARN_CONTAINER_ALLOCATED_PRIORITY": 0, "YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS": "http://ctr-e29-1471411959733-0412-01-04.hwx.site:25008";, "YARN_CONTAINER_ALLOCATED_HOST": "ctr-e29-1471411959733-0412-01-04.hwx.site", "YARN_CONTAINER_ALLOCATED_VCORE": 1 } }, { "id": "YARN_CONTAINER_CREATED", "timestamp": -1, "info": {} }, { "id": "YARN_CONTAINER_FINISHED", "timestamp": -1, "info": { "YARN_CONTAINER_STATE": "RUNNING", "YARN_CONTAINER_EXIT_STATUS": 0, "YARN_CONTAINER_DIAGNOSTICS_INFO": "" } }, { "id": "YARN_NM_CONTAINER_LOCALIZATION_FINISHED", "timestamp": -1, "info": {} }, { "id": "YARN_NM_CONTAINER_LOCALIZATION_STARTED", "timestamp": -1, "info": {} } ], "type": "YARN_CONTAINER", "id": "container_e09_1475277121920_0010_01_01", "createdtime": -1, "info": { "YARN_CONTAINER_ALLOCATED_PORT": 25006, "UID": "yarn-cluster!application_1475277121920_0010!YARN_CONTAINER!container_e09_1475277121920_0010_01_01", "YARN_CONTAINER_ALLOCATED_MEMORY": 1024, "SYSTEM_INFO_PARENT_ENTITY": { "type": "YARN_APPLICATION_ATTEMPT", "id": "appattempt_1475277121920_0010_01" }, "YARN_CONTAINER_ALLOCATED_PRIORITY": "0", "YARN_CONTAINER_ALLOCATED_HOST": "ctr-e29-1471411959733-0412-01-04.hwx.site", "YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS": "ctr-e29-1471411959733-0412-01-04.hwx.site:0", 
"YARN_CONTAINER_ALLOCATED_VCORE": 1 }, "configs": {}, "isrelatedto": {}, "relatesto": {} } ] {code} bq. Container event publishing from RM is optional. I see. But I think we should make it mandatory to publish from RM since some information always get from RM like container_state. Otherwise we miss the this information else container_state info should be published from NM. YARN-5156 removes from NM publisher. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch > > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. These are general information to container required for > container-report. So it is better to publish at top level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551754#comment-15551754 ] Varun Saxena commented on YARN-5699: [~rohithsharma], thanks for the patch. In the patch I see you have removed container-related info from NMTimelinePublisher. Is this intentional? Container event publishing from the RM is optional, and it will ideally be switched off due to the volume and the impact on the RM. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch > > > Currently, all the container information is published in two places: some of > it at the entity info (top) level and some at the event info level. > For containers, some of the event info should be published at the container > info level, for example: container exit status, container state, createdTime, > finished time. This is general container information required for the > container report, so it is better to publish it in the top-level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-5571) [Atsv2] Query App REST endpoint need not expose queryParams such as userId/flowname/flowrunid
[ https://issues.apache.org/jira/browse/YARN-5571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S resolved YARN-5571. - Resolution: Won't Fix As per the discussion, these params are for faster lookup of entities, so resolving as Won't Fix. Thanks, Varun, for clearing up my doubts! > [Atsv2] Query App REST endpoint need not expose queryParams such as > userId/flowname/flowrunid > - > > Key: YARN-5571 > URL: https://issues.apache.org/jira/browse/YARN-5571 > Project: Hadoop YARN > Issue Type: Bug > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > > The timeline reader provides a REST endpoint for querying an app with the URL > {{GET /ws/v2/timeline/apps/\{app id\}}} along with queryParams as filters. But > queryParams such as {{userId/flowname/flowrunid}} are not useful for querying > an app by app-id: in a YARN cluster, an app-id is unique throughout the > cluster's lifetime, so userId/flowname/flowrunid are not useful for the > app-id REST endpoint. > {noformat} > @GET > @Path("/apps/{appid}/") > @Produces(MediaType.APPLICATION_JSON) > public TimelineEntity getApp( > @Context HttpServletRequest req, > @Context HttpServletResponse res, > @PathParam("appid") String appId, > @QueryParam("flowname") String flowName, > @QueryParam("flowrunid") String flowRunId, > @QueryParam("userid") String userId, > @QueryParam("confstoretrieve") String confsToRetrieve, > @QueryParam("metricstoretrieve") String metricsToRetrieve, > @QueryParam("fields") String fields, > @QueryParam("metricslimit") String metricsLimit) { > return getApp(req, res, null, appId, flowName, flowRunId, userId, > confsToRetrieve, metricsToRetrieve, fields, metricsLimit); > } > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
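The resolution above ("these params are for faster look up of entities") can be illustrated with a small model. This is a hypothetical Python sketch, not ATSv2 storage code: entity rows are keyed by flow context, so a query that supplies user/flow/flowrunid can do a direct key lookup, while a query with only the app-id must first resolve the flow context from a separate app-to-flow index (an extra lookup). All table and key names below are illustrative assumptions.

```python
# Model: why flow-context query params speed up an app-id lookup even though
# the app-id alone uniquely identifies the app. Data and names are made up.

entity_table = {
    ("yarn-cluster", "rohith", "sleep-job", 1, "application_1475277121920_0010"):
        {"state": "FINISHED"},
}
app_flow_index = {  # maps app id -> its flow context
    "application_1475277121920_0010": ("yarn-cluster", "rohith", "sleep-job", 1),
}

def get_app(app_id, cluster="yarn-cluster", user=None, flow=None, run=None):
    """Return (entity, number_of_lookups). Missing flow context costs an
    extra round trip to the app-to-flow index."""
    lookups = 1
    if user is None or flow is None or run is None:
        cluster, user, flow, run = app_flow_index[app_id]  # extra lookup
        lookups += 1
    return entity_table[(cluster, user, flow, run, app_id)], lookups

_, fast = get_app("application_1475277121920_0010",
                  user="rohith", flow="sleep-job", run=1)
_, slow = get_app("application_1475277121920_0010")
assert (fast, slow) == (1, 2)
```

So the params are redundant for identification but not for performance, which is why the issue was closed as Won't Fix.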
[jira] [Updated] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5699: Attachment: 0001-YARN-5699.YARN-5355.patch updating same patch to branch YARN-5355 > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch > > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. These are general information to container required for > container-report. So it is better to publish at top level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551737#comment-15551737 ] Hadoop QA commented on YARN-5699: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} | {color:red} YARN-5699 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831941/0001-YARN-5699.patch | | JIRA Issue | YARN-5699 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13305/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.patch > > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. These are general information to container required for > container-report. So it is better to publish at top level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5699: Summary: Retrospect yarn entity fields which are publishing in events info fields. (was: Retrospect container entity fields which are publishing in events info fields.) > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.patch > > > Currently, all the container information are published at 2 places. Some of > them are at entity info(top-level) and some are at event info. > For containers, some of the event info should be published at container info > level. For example : container exist status, container state, createdTime, > finished time. These are general information to container required for > container-report. So it is better to publish at top level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5699) Retrospect container entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5699: Attachment: 0001-YARN-5699.patch Updated the patch with the following changes. Application: # The YARN_APPLICATION_FINISHED event has some app-level information; I have moved it to the entity-level info. ApplicationAttempt: # The YARN_APPLICATION_ATTEMPT_FINISHED and YARN_APPLICATION_ATTEMPT_REGISTERED events have application-report-specific info details. These have been moved to the entity info level. Container: # The YARN_RM_CONTAINER_CREATED and YARN_RM_CONTAINER_FINISHED events had container-report-specific info details. These have been moved to the entity info level. # Removed duplicated information which was published from the NM publisher. # Added container-created and container-finished information in the entity-level info. > Retrospect container entity fields which are publishing in events info fields. > -- > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.patch > > > Currently, all the container information is published in two places: some of > it at the entity info (top) level and some at the event info level. > For containers, some of the event info should be published at the container > info level, for example: container exit status, container state, createdTime, > finished time. This is general container information required for the > container report, so it is better to publish it in the top-level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
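The restructuring described in the patch notes above can be sketched as follows: info that was only available inside the CREATED/FINISHED event objects is lifted into the entity's top-level "info" map, so a container report can be built without walking the event list. The key names mirror the JSON shown earlier in this thread; the lifting function itself is an illustrative sketch, not the actual patch.

```python
# Sketch: promote report-relevant info from specific events to the
# entity-level info map (the change proposed in YARN-5699).

LIFTED_EVENTS = {"YARN_RM_CONTAINER_CREATED", "YARN_RM_CONTAINER_FINISHED"}

def lift_event_info(entity):
    """Return a copy of the entity with info from lifted events merged
    into the top-level info map."""
    out = dict(entity)
    out["info"] = dict(entity.get("info", {}))
    for event in entity.get("events", []):
        if event["id"] in LIFTED_EVENTS:
            out["info"].update(event.get("info", {}))  # promote to top level
    return out

entity = {
    "id": "container_e09_1475277121920_0010_01_01",
    "info": {"YARN_CONTAINER_ALLOCATED_MEMORY": 1024},
    "events": [
        {"id": "YARN_RM_CONTAINER_FINISHED",
         "info": {"YARN_CONTAINER_EXIT_STATUS": 0,
                  "YARN_CONTAINER_STATE": "COMPLETE"}},
    ],
}
lifted = lift_event_info(entity)
assert lifted["info"]["YARN_CONTAINER_EXIT_STATUS"] == 0
assert lifted["info"]["YARN_CONTAINER_ALLOCATED_MEMORY"] == 1024
```

A consumer building a container report then reads exit status, state, and allocation details from one flat map instead of scanning events.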
[jira] [Updated] (YARN-5556) Support for deleting queues without requiring a RM restart
[ https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naganarasimha G R updated YARN-5556: Attachment: YARN-5556.v1.002.patch Hi [~xgong], I have updated the patch with delete support for parent queues as discussed, along with test cases covering the scenarios. > Support for deleting queues without requiring a RM restart > -- > > Key: YARN-5556 > URL: https://issues.apache.org/jira/browse/YARN-5556 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Xuan Gong >Assignee: Naganarasimha G R > Attachments: YARN-5556.v1.001.patch, YARN-5556.v1.002.patch > > > Today, we can add or modify queues without restarting the RM, via a CS > refresh. But to delete a queue, we have to restart the ResourceManager. We > could support deleting queues without requiring an RM restart. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor
[ https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551508#comment-15551508 ] Varun Vasudev commented on YARN-5388: - I'm in favour of removing DockerContainerExecutor, but it should be done via the dev list. We should send out an email to the users and yarn dev lists asking if anyone has objections, etc. > MAPREDUCE-6719 requires changes to DockerContainerExecutor > -- > > Key: YARN-5388 > URL: https://issues.apache.org/jira/browse/YARN-5388 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Fix For: 2.9.0 > > Attachments: YARN-5388.001.patch, YARN-5388.002.patch, > YARN-5388.branch-2.001.patch, YARN-5388.branch-2.002.patch > > > Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} > method, it must also have the wildcard processing logic from > YARN-4958/YARN-5373 added to it. Without it, the use of -libjars will fail > unless wildcarding is disabled. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5025) Container move (relocation) between nodes
[ https://issues.apache.org/jira/browse/YARN-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551451#comment-15551451 ] ASF GitHub Bot commented on YARN-5025: -- GitHub user szape opened a pull request: https://github.com/apache/hadoop/pull/134 YARN-5025. Container move (relocation) between nodes Support for relocating containers has become a must-have requirement for most multi-service applications, since the inevitable concept-drifts make SLAs hard to be satisfied. The relocation and co-location of services (long running containers) can help to reduce bottlenecks in a multi-service cluster, especially where data-intensive, streaming applications interfere. You can merge this pull request into a Git repository by running: $ git pull https://github.com/szape/hadoop YARN-5025 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hadoop/pull/134.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #134 commit 5ada9ce8141265713d79f6c846d3aeb5d964fff5 Author: szape Date: 2016-09-15T09:48:19Z YARN-5025. Added container relocation logic to FifoScheduler commit ff1bdfd8e88e9f40db3109e9e3aa77d5264c4f3b Author: szape Date: 2016-09-16T10:15:17Z YARN-5025. Integrated container relocation feature into AMRMClient commit 260530caf17866a4fac3057406e4d5047080f4b4 Author: szape Date: 2016-09-19T09:50:55Z YARN-5025. 
Added and updated container management protocols for container relocation > Container move (relocation) between nodes > - > > Key: YARN-5025 > URL: https://issues.apache.org/jira/browse/YARN-5025 > Project: Hadoop YARN > Issue Type: New Feature > Components: nodemanager, resourcemanager, yarn >Reporter: Zoltán Zvara > Attachments: YARN-Container-Move-(Relocation)-Between-Nodes.pdf > > > Support for relocating containers has become a must-have requirement for most > multi-service applications, since the inevitable concept-drifts make SLAs > hard to be satisfied. The relocation and co-location of services (long > running containers) can help to reduce bottlenecks in a multi-service > cluster, especially where data-intensive, streaming applications interfere. > See the high-level implementation details in the attached design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551422#comment-15551422 ] sandflee commented on YARN-5453: Hi [~kasha], the patch is updated; the failed test passes locally and seems unrelated. > FairScheduler#update may skip update demand resource of child queue/app if > current demand reached maxResource > - > > Key: YARN-5453 > URL: https://issues.apache.org/jira/browse/YARN-5453 > Project: Hadoop YARN > Issue Type: Bug >Reporter: sandflee >Assignee: sandflee > Attachments: YARN-5453.01.patch, YARN-5453.02.patch, > YARN-5453.03.patch, YARN-5453.04.patch > > > {code} > demand = Resources.createResource(0); > for (FSQueue childQueue : childQueues) { > childQueue.updateDemand(); > Resource toAdd = childQueue.getDemand(); > demand = Resources.add(demand, toAdd); > demand = Resources.componentwiseMin(demand, maxRes); > if (Resources.equals(demand, maxRes)) { > break; > } > } > {code} > If one single queue's demand resource exceeds maxRes, the other queues' demand > resources will not be updated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
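The bug quoted in the {code} block above can be modeled in a few lines. This is a Python sketch of the loop's control flow, not the Java scheduler itself, and the "fixed" variant below simply keeps iterating (the actual YARN-5453 patch may differ in detail): because the running total is capped at maxRes and the loop breaks once the cap is hit, updateDemand() never runs on the remaining children, leaving their cached demand stale.

```python
# Model of FairScheduler#update's demand aggregation. "updated" records which
# children had their demand refreshed (i.e. childQueue.updateDemand() ran).

def update_demand(children, max_res, fixed=False):
    demand, updated = 0, []
    for child in children:
        updated.append(child["name"])          # child.updateDemand() ran
        demand = min(demand + child["demand"], max_res)
        if not fixed and demand == max_res:
            break                              # remaining children skipped
    return demand, updated

children = [{"name": "a", "demand": 100}, {"name": "b", "demand": 30}]

# Queue "a" alone reaches maxRes, so "b" is never updated:
_, updated = update_demand(children, max_res=50)
assert updated == ["a"]

# Keeping the iteration going refreshes every child's demand while the
# aggregate stays capped at maxRes:
total, updated = update_demand(children, max_res=50, fixed=True)
assert updated == ["a", "b"] and total == 50
```

The early break is a valid optimization for the aggregate value but has the side effect of skipping the recursive per-child update, which is exactly what the issue describes.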
[jira] [Commented] (YARN-5699) Retrospect container entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551391#comment-15551391 ] Rohith Sharma K S commented on YARN-5699: - I think it is time to decide whether we should support a translation layer for YARN entities to get attempt reports, container reports, and application reports. It could be an additional web service in ATS2, similar to */ws/v1/applicationhistory*. The major concern of this JIRA is deserializing attempt reports and container reports for the Web UI. A translation layer usable from Java would make this much easier; otherwise, deserializing the entity payload becomes a tedious task whenever the published entity changes. > Retrospect container entity fields which are publishing in events info fields. > -- > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > > Currently, all container information is published in two places: some of it in the entity info (top-level) and some in the event info. > For containers, some of the event info should be published at the container info > level, for example: container exit status, container state, created time, > finished time. This is general container information required for the > container report, so it is better to publish it in the top-level info field.
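A sketch of what the translation layer discussed above could look like, as standalone Java. The trimmed-down ContainerReport class and the info-map key names are purely illustrative assumptions, not the real YARN types or constants; the point is reading report fields from well-known top-level info keys instead of digging through per-event info.

```java
import java.util.Map;

public class EntityTranslationSketch {

    // Hypothetical, trimmed-down stand-in for a container report.
    static class ContainerReport {
        final String containerId;
        final int exitStatus;
        final long createdTime;
        ContainerReport(String containerId, int exitStatus, long createdTime) {
            this.containerId = containerId;
            this.exitStatus = exitStatus;
            this.createdTime = createdTime;
        }
    }

    // Translate an entity's top-level info map into a report. The key names
    // here are illustrative only; in practice the constants would live in one
    // shared class so writers and this translation layer cannot drift apart.
    static ContainerReport toContainerReport(String entityId, Map<String, Object> info) {
        int exit = ((Number) info.getOrDefault("EXIT_STATUS", -1)).intValue();
        long created = ((Number) info.getOrDefault("CREATED_TIME", 0L)).longValue();
        return new ContainerReport(entityId, exit, created);
    }
}
```

With such a layer in one place, a schema change in the published entity means updating a single translation method rather than every Web UI deserializer.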
[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port
[ https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551365#comment-15551365 ] Sunil G commented on YARN-5698: --- [~leftnoteasy], kindly help to check. > [YARN-3368] Launch new YARN UI under hadoop web app port > > > Key: YARN-5698 > URL: https://issues.apache.org/jira/browse/YARN-5698 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5698-YARN-3368.0001.patch, > YARN-5698-YARN-3368.0002.patch, YARN-5698-YARN-3368.0003.patch > > > As discussed in YARN-5145, it will be better to launch the new web UI as a new > webapp under the same old port.
[jira] [Updated] (YARN-5101) YARN_APPLICATION_UPDATED event is parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with reversed order
[ https://issues.apache.org/jira/browse/YARN-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5101: Target Version/s: 2.8.0 > YARN_APPLICATION_UPDATED event is parsed in > ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with > reversed order > --- > > Key: YARN-5101 > URL: https://issues.apache.org/jira/browse/YARN-5101 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xuan Gong >Assignee: Sunil G > Attachments: YARN-5101.0001.patch, YARN-5101.0002.patch > > > Right now, the application events are parsed in > ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport in > timestamp-descending order, which means the later events are parsed > first, and earlier events of the same type override their information. In > https://issues.apache.org/jira/browse/YARN-4044, we introduced > YARN_APPLICATION_UPDATED events, which might be submitted by the RM multiple times > in one application life cycle. This could cause problems.
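The ordering problem described above can be sketched as standalone Java. The minimal Event class is a hypothetical stand-in for a timeline event, and this illustrates one possible fix (not the actual patch): fold events oldest-first, so the latest event of each type wins even when the store hands events back newest-first.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EventOrderSketch {

    // Minimal stand-in for a timeline event: a type, a timestamp, and one payload value.
    static class Event {
        final String type;
        final long timestamp;
        final String value;
        Event(String type, long timestamp, String value) {
            this.type = type;
            this.timestamp = timestamp;
            this.value = value;
        }
    }

    // The store returns events newest-first. Folding them in that order lets an
    // older event of the same type clobber a newer one (the bug described above).
    // Sorting ascending before folding makes the latest event of each type win.
    static Map<String, String> latestWins(List<Event> newestFirst) {
        List<Event> oldestFirst = new ArrayList<>(newestFirst);
        oldestFirst.sort(Comparator.comparingLong((Event e) -> e.timestamp));
        Map<String, String> fields = new HashMap<>();
        for (Event e : oldestFirst) {
            fields.put(e.type, e.value); // later events legitimately override earlier ones
        }
        return fields;
    }
}
```

With two YARN_APPLICATION_UPDATED events (timestamps 100 and 200), the folded result keeps the payload of the timestamp-200 event, as a repeatable update event requires.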
[jira] [Commented] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port
[ https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551319#comment-15551319 ] Hadoop QA commented on YARN-5698: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 9s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 10s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 56s {color} | {color:green} YARN-3368 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 26s {color} | {color:green} 
YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} YARN-3368 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s {color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s {color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 19s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s {color} | {color:green} hadoop-yarn-ui in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 35s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:88ca7e4 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831907/YARN-5698-YARN-3368.0003.p
[jira] [Commented] (YARN-5453) FairScheduler#update may skip update demand resource of child queue/app if current demand reached maxResource
[ https://issues.apache.org/jira/browse/YARN-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551307#comment-15551307 ] Hadoop QA commented on YARN-5453: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 46s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 0s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMHA | | | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831871/YARN-5453.04.patch | | JIRA Issue | YARN-5453 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d8988084463a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 272a217 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/13304/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit test logs | https://builds.apache.org/job/PreCommit-YARN-Build/13304/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13304/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output |
[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor
[ https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551296#comment-15551296 ] Sidharta Seethana commented on YARN-5704: - [~aw] As you can see, most of the changes in the patch are in main.c. Changes to this file cannot be tested via test-container-executor, I believe? > Provide config knobs to control enabling/disabling new/work in progress > features in container-executor > -- > > Key: YARN-5704 > URL: https://issues.apache.org/jira/browse/YARN-5704 > Project: Hadoop YARN > Issue Type: Task > Components: yarn >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana > Attachments: YARN-5704.001.patch > > > Provide a mechanism to enable/disable Docker and TC (Traffic Control) > functionality at the container-executor level.
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551279#comment-15551279 ] Hadoop QA commented on YARN-2009: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 82 new + 179 unchanged - 29 fixed = 261 total (was 208) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 38s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector$TAPriorityComparator implements Comparator but not Serializable At IntraQueueCandidatesSelector.java:Serializable At IntraQueueCandidatesSelector.java:[lines 45-55] | | | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector$TAReverseComparator implements Comparator but not Serializable At IntraQueueCandidatesSelector.java:Serializable At IntraQueueCandidatesSelector.java:[lines 59-69] | | | Unread field:TempAppPerPartition.java:[line 60] | | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling | | | hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831577/YARN-2009.0005.patch | | JIRA Issue | YARN-2009 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 882d9c86c04c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 201
[jira] [Updated] (YARN-5698) [YARN-3368] Launch new YARN UI under hadoop web app port
[ https://issues.apache.org/jira/browse/YARN-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-5698: -- Attachment: YARN-5698-YARN-3368.0003.patch Updating new patch after fixing the test case. > [YARN-3368] Launch new YARN UI under hadoop web app port > > > Key: YARN-5698 > URL: https://issues.apache.org/jira/browse/YARN-5698 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-5698-YARN-3368.0001.patch, > YARN-5698-YARN-3368.0002.patch, YARN-5698-YARN-3368.0003.patch > > > As discussed in YARN-5145, it will be better to launch the new web UI as a new > webapp under the same old port.