[jira] [Created] (YARN-7925) Some NPE errors cause display errors when setting node labels
Jinjiang Ling created YARN-7925: --- Summary: Some NPE errors cause display errors when setting node labels Key: YARN-7925 URL: https://issues.apache.org/jira/browse/YARN-7925 Project: Hadoop YARN Issue Type: Bug Affects Versions: 3.1.0 Reporter: Jinjiang Ling Assignee: Jinjiang Ling I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But when I add a new node label and attach a NodeManager to it, it sometimes causes a display error. !image-2018-02-13-07-57-39-128.png! Then I found that *this error happens when no queue can access the label*. After checking the log, I found some NPE errors. {quote} .. Caused by: java.lang.NullPointerException at org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) .. {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
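For context, a minimal sketch of the failure mode described above, assuming a simplified ResourceInfo; the class shape and field name are illustrative, not the exact Hadoop source:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

// Illustrative sketch: when no queue can access the new label, the
// per-partition capacity info is never populated, so the wrapped Resource
// stays null and toString() throws the NPE seen at ResourceInfo.java:73.
public class ResourceInfoSketch {
  private Resource resource; // left null for a partition no queue can access

  @Override
  public String toString() {
    return resource.toString(); // NullPointerException when resource == null
  }
}
{code}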
[jira] [Comment Edited] (YARN-7292) Revisit Resource Profile Behavior
[ https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361948#comment-16361948 ] Sunil G edited comment on YARN-7292 at 2/13/18 8:00 AM: Thanks [~leftnoteasy]. I am not very sure that the "do-not-override" option is needed, as we have a cleaner option like zero capability from the utils (or from the server). Other than that, the patch looks fine to me. +1. was (Author: sunilg): Thanks [~leftnoteasy]. I am not very sure that the "do-not-override" option is needed, as we have a cleaner option like zero capability from the utils (or from the server). > Revisit Resource Profile Behavior > - > > Key: YARN-7292 > URL: https://issues.apache.org/jira/browse/YARN-7292 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7292.002.patch, YARN-7292.003.patch, > YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, > YARN-7292.wip.001.patch > > > Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a > couple of resource-profile-related behaviors that might need to be updated: > 1) Configure resource profiles on the server side or the client side: > Currently, resource profiles can only be configured centrally: > - Advantages: > A given resource profile has the same meaning across the cluster. It won’t > change when we run different apps in different configurations. A job that can run > under Amazon’s G2.8X can also run on YARN with a G2.8X profile. A side benefit > is that the YARN scheduler can potentially do better bin packing. > - Disadvantages: > Hard for applications to add their own resource profiles. > 2) Do we really need mandatory resource profiles such as > minimum/maximum/default? > 3) Should we send the resource profile name inside the ResourceRequest, or should the > client/AM translate it to a resource and set it in the existing resource > fields? > 4) Related to the above, should we allow resource overrides, or should the client/AM > send the final resource to the RM? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior
[ https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361948#comment-16361948 ] Sunil G commented on YARN-7292: --- Thanks [~leftnoteasy]. I am not very sure that the "do-not-override" option is needed, as we have a cleaner option like zero capability from the utils (or from the server). > Revisit Resource Profile Behavior > - > > Key: YARN-7292 > URL: https://issues.apache.org/jira/browse/YARN-7292 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7292.002.patch, YARN-7292.003.patch, > YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, > YARN-7292.wip.001.patch > > > Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a > couple of resource-profile-related behaviors that might need to be updated: > 1) Configure resource profiles on the server side or the client side: > Currently, resource profiles can only be configured centrally: > - Advantages: > A given resource profile has the same meaning across the cluster. It won’t > change when we run different apps in different configurations. A job that can run > under Amazon’s G2.8X can also run on YARN with a G2.8X profile. A side benefit > is that the YARN scheduler can potentially do better bin packing. > - Disadvantages: > Hard for applications to add their own resource profiles. > 2) Do we really need mandatory resource profiles such as > minimum/maximum/default? > 3) Should we send the resource profile name inside the ResourceRequest, or should the > client/AM translate it to a resource and set it in the existing resource > fields? > 4) Related to the above, should we allow resource overrides, or should the client/AM > send the final resource to the RM? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinjiang Ling updated YARN-7925: Description: I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But when I add a new node label and attach a NodeManager to it, it sometimes causes a display error. !DisplayError.png|width=573,height=188! Then I found that *this error happens when no queue can access the label*. After checking the log, I found some NPE errors. {quote}.. Caused by: java.lang.NullPointerException at org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) .. {quote} was: I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But when I add a new node label and attach a NodeManager to it, it sometimes causes a display error. !image-2018-02-13-07-57-39-128.png! Then I found that *this error happens when no queue can access the label*. After checking the log, I found some NPE errors. {quote} .. Caused by: java.lang.NullPointerException at org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) .. {quote} > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Major > Attachments: DisplayError.png > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !DisplayError.png|width=573,height=188! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote}.. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinjiang Ling updated YARN-7925: Attachment: DisplayError.png > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Major > Attachments: DisplayError.png > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !image-2018-02-13-07-57-39-128.png! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote} > .. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinjiang Ling updated YARN-7925: Attachment: YARN-7925.patch > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Major > Attachments: DisplayError.png > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !DisplayError.png|width=573,height=188! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote}.. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7789) Should fail RM if 3rd resource type is configured but RM uses DefaultResourceCalculator
[ https://issues.apache.org/jira/browse/YARN-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361955#comment-16361955 ] Sunil G commented on YARN-7789: --- Patch looks fine to me, but the test case failure seems related. Could you please check, [~Zian Chen]? > Should fail RM if 3rd resource type is configured but RM uses > DefaultResourceCalculator > --- > > Key: YARN-7789 > URL: https://issues.apache.org/jira/browse/YARN-7789 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sumana Sathish >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7789.001.patch, YARN-7789.002.patch > > > We may need to revisit this behavior: currently, the RM doesn't fail if a 3rd > resource type is configured; allocated containers will automatically be > assigned the minimum allocation for all resource types except memory, which makes > troubleshooting really hard. I prefer to fail the RM if a 3rd or further resource > type is configured inside resource-types.xml. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
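A hedged sketch of the kind of startup validation being proposed above; ResourceUtils, DefaultResourceCalculator, and YarnRuntimeException are existing YARN classes, while the wrapper class, method, and exact hook point into RM initialization are illustrative assumptions:

{code:java}
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceUtils;

public final class ResourceTypeValidationSketch {
  // Memory and vcores are the two mandatory types; anything beyond them is a
  // "3rd" resource type that DefaultResourceCalculator silently ignores.
  public static void validate(ResourceCalculator calculator) {
    int types = ResourceUtils.getResourceTypes().size();
    if (types > 2 && calculator instanceof DefaultResourceCalculator) {
      throw new YarnRuntimeException(types + " resource types are configured,"
          + " but the RM uses DefaultResourceCalculator, which only accounts"
          + " for memory. Configure DominantResourceCalculator instead.");
    }
  }
}
{code}

Failing fast at startup like this surfaces the misconfiguration immediately, instead of letting containers silently receive minimum allocations for the extra types.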
[jira] [Updated] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinjiang Ling updated YARN-7925: Attachment: (was: YARN-7925.patch) > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Major > Attachments: DisplayError.png > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !DisplayError.png|width=573,height=188! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote}.. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinjiang Ling updated YARN-7925: Attachment: YARN-7925.001.patch > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Major > Attachments: DisplayError.png, YARN-7925.001.patch > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !DisplayError.png|width=573,height=188! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote}.. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM
[ https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361971#comment-16361971 ] Sunil G commented on YARN-6858: --- Thanks [~Naganarasimha]. In general the patch looks fine to me. Some minor nits. # The {{RMLabel}} name seems a bit confusing. It's a property or a tag pointing to either a set of nodes or one node, so *GenericTagInfo* or something similar may be better, because it consolidates all general properties associated with a tag (which could be a partition/label/attribute). # RMLabel#getResource is not cloning the incoming resource when it's created. If this is a read-only param and usage of the resource is light, we don't need a copy; otherwise it's better to clone. (If this getter is called often, I prefer not to clone, since cloning carries a bit of performance cost; a sketch follows this message.) # RMAttributeNodeLabel: could it be just RMNodeAttribute? # In RMAttributeNodeLabel, do we need to make that Set concurrent? I can see that it's used as a ConcurrentHashMap where it is consumed, but it could be referenced for metrics/REST, right? Are we planning to consume these objects only via the Manager? # {{public Map getAttributesForNode}} is a bit confusing, although I can see the intent. For the scheduler, the value is also needed to do the necessary type conversions, but for many metrics we don't need the value. So we could also have another *getAttributesForNode* which returns a list of *NodeAttribute* alone. > Attribute Manager to store and provide the attributes in RM > --- > > Key: YARN-6858 > URL: https://issues.apache.org/jira/browse/YARN-6858 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api, capacityscheduler, client >Reporter: Naganarasimha G R >Assignee: Naganarasimha G R >Priority: Major > Attachments: YARN-6858-YARN-3409.001.patch, > YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, > YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch > > > Similar to CommonNodeLabelsManager we need to have a centralized manager for > Node Attributes too. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
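On the cloning question in point 2 above, a minimal sketch of the two options; Resources.clone is the existing YARN util, while the holder class and getter names are illustrative:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

// Illustrative holder showing the trade-off from the review comment: return
// the internal reference (cheap, but callers can mutate shared state) or a
// defensive copy (safe for read-mostly callers, at a per-call allocation cost).
public class AttributeResourceHolderSketch {
  private Resource resource;

  public Resource getResource() {
    return resource; // fine only if callers treat the result as read-only
  }

  public Resource getResourceCopy() {
    return Resources.clone(resource); // prefer when callers may mutate it
  }
}
{code}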
[jira] [Commented] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361981#comment-16361981 ] Sunil G commented on YARN-7925: --- Thanks [~lingjinjiang]. This is a problem. I checked the patch. 1. The PartitionQueueCapacitiesInfo change for getConfiguredMaxResource was added to show "unlimited" in the CS page. So if it's an issue, we can do a null check in getConfiguredMaxResource. 2. Rather than adding a *null* check in CapacitySchedulerPage, let's see whether we can handle this in specific logic in *PartitionQueueCapacitiesInfo.PartitionQueueCapacitiesInfo()*. I think we can initialize all 4 objects (configuredMinResource, configuredMaxResource, effectiveMinResource, effectiveMaxResource) to Resource.newInstance(0,0), so that even if we invoke {{QueueCapacitiesInfo.getPartitionQueueCapacitiesInfo(String partitionName)}} and the queue info is not configured for *partitionName*, we will still have valid resource objects initialized to 0 (see the sketch after this message). > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Major > Attachments: DisplayError.png, YARN-7925.001.patch > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !DisplayError.png|width=573,height=188! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote}.. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
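A minimal sketch of option 2 above; only the zero-initialization idea comes from the comment, and the class shape and field types are illustrative rather than the actual patch:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

// Sketch: default the four capacity resources to an all-zero Resource so a
// partition that no queue can access still renders 0 instead of triggering
// the NPE from ResourceInfo.toString().
public class PartitionQueueCapacitiesInfoSketch {
  private Resource configuredMinResource = Resource.newInstance(0, 0);
  private Resource configuredMaxResource = Resource.newInstance(0, 0);
  private Resource effectiveMinResource = Resource.newInstance(0, 0);
  private Resource effectiveMaxResource = Resource.newInstance(0, 0);
}
{code}

Initializing in the constructor keeps the null handling in one place, instead of scattering null checks across the web UI rendering code.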
[jira] [Updated] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7925: -- Priority: Blocker (was: Major) > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Blocker > Attachments: DisplayError.png, YARN-7925.001.patch > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !DisplayError.png|width=573,height=188! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote}.. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362001#comment-16362001 ] Jinjiang Ling commented on YARN-7925: - Thanks for your review, [~sunilg]. I'll upload another patch later. > Some NPE errors cause display errors when setting node labels > > > Key: YARN-7925 > URL: https://issues.apache.org/jira/browse/YARN-7925 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Jinjiang Ling >Assignee: Jinjiang Ling >Priority: Blocker > Attachments: DisplayError.png, YARN-7925.001.patch > > > I'm trying to use node labels with the latest Hadoop (3.1.0-SNAPSHOT). But > when I add a new node label and attach a NodeManager to it, it sometimes > causes a display error. > !DisplayError.png|width=573,height=188! > Then I found that *this error happens when no queue can access the label*. > After checking the log, I found some NPE errors. > {quote}.. > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:73) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:160) > .. > {quote} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount
[ https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362008#comment-16362008 ] genericqa commented on YARN-7626: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 30m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 34s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7626 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910347/YARN-7626.008.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 9b042fad9c96 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0c5d7d7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19671/testReport/ | | Max. process+thread count | 341 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19671/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Allow regular expression matching in container-executor.cfg for devices and > named docker volumes mount > -- > > Key: YARN-7626 > URL: https://issues.apache.org/jira/browse/YARN-7626 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Zian Chen >Assignee: Zian Chen >Priority: Major > Attachments: YARN-7626.001.patch, YARN-7626.002.patch, > YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch, > YARN-7626.006.patch, YARN-7626.007.patch, YARN-7626.008.patch > > > Currently, when we configure some of the GPU-device-related fields (like ) in > container-executor.cfg, these fields are generated based on different driver > versions or GPU device names. We want to enable regular expression matching > so that users don't need to manually set up these fields when configuring > container-executor.cfg.
[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362022#comment-16362022 ] genericqa commented on YARN-7920: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 408 unchanged - 0 fixed = 417 total (was 408) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 25s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}186m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSchedulingRequestUpdate | | | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector | | | hadoop.yarn.client.api.impl.TestAMRMClientPlacementCons
[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362027#comment-16362027 ] Wangda Tan commented on YARN-7920: -- Thanks [~sunilg], I plan to update the documentation in a separate Jira, or after general consensus is reached on the proposed changes. I will address the rest of the comments in the next patch. > Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update the placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, which defaults to none; the other acceptable > values are a. external-processor (since "algorithm" is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass the SchedulingRequest to the > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount
[ https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362084#comment-16362084 ] Wangda Tan commented on YARN-7626: -- [~miklos.szeg...@cloudera.com], could you help check the latest patch? > Allow regular expression matching in container-executor.cfg for devices and > named docker volumes mount > -- > > Key: YARN-7626 > URL: https://issues.apache.org/jira/browse/YARN-7626 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Zian Chen >Assignee: Zian Chen >Priority: Major > Attachments: YARN-7626.001.patch, YARN-7626.002.patch, > YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch, > YARN-7626.006.patch, YARN-7626.007.patch, YARN-7626.008.patch > > > Currently, when we configure some of the GPU-device-related fields (like ) in > container-executor.cfg, these fields are generated based on different driver > versions or GPU device names. We want to enable regular expression matching > so that users don't need to manually set up these fields when configuring > container-executor.cfg. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7881) Add Log Aggregation Status API to the RM Webservice
[ https://issues.apache.org/jira/browse/YARN-7881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gergely Novák updated YARN-7881: Attachment: YARN-7881.004.patch > Add Log Aggregation Status API to the RM Webservice > --- > > Key: YARN-7881 > URL: https://issues.apache.org/jira/browse/YARN-7881 > Project: Hadoop YARN > Issue Type: New Feature > Components: yarn >Reporter: Gergely Novák >Assignee: Gergely Novák >Priority: Major > Attachments: YARN-7881.001.patch, YARN-7881.002.patch, > YARN-7881.003.patch, YARN-7881.004.patch > > > The old YARN UI has a page: /cluster/logaggregationstatus/\{app_id} which > shows the log aggregation status for all the nodes that run containers for > the given application. In order to add a similar page to the new YARN UI we > need to add an RM WS endpoint first. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7881) Add Log Aggregation Status API to the RM Webservice
[ https://issues.apache.org/jira/browse/YARN-7881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362086#comment-16362086 ] Gergely Novák commented on YARN-7881: - Fixed the checkstyle warnings. > Add Log Aggregation Status API to the RM Webservice > --- > > Key: YARN-7881 > URL: https://issues.apache.org/jira/browse/YARN-7881 > Project: Hadoop YARN > Issue Type: New Feature > Components: yarn >Reporter: Gergely Novák >Assignee: Gergely Novák >Priority: Major > Attachments: YARN-7881.001.patch, YARN-7881.002.patch, > YARN-7881.003.patch, YARN-7881.004.patch > > > The old YARN UI has a page: /cluster/logaggregationstatus/\{app_id} which > shows the log aggregation status for all the nodes that run containers for > the given application. In order to add a similar page to the new YARN UI we > need to add an RM WS endpoint first. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7925) Some NPE errors cause display errors when setting node labels
[ https://issues.apache.org/jira/browse/YARN-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362090#comment-16362090 ] genericqa commented on YARN-7925: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 40 unchanged - 0 fixed = 41 total (was 40) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 70m 13s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}124m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7925 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910352/YARN-7925.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8105ecc2be8d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0c5d7d7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/19672/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19672/testReport/ | | Max. process+thread cou
[jira] [Updated] (YARN-6858) Attribute Manager to store and provide the attributes in RM
[ https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naganarasimha G R updated YARN-6858: Attachment: YARN-6858-YARN-3409.006.patch > Attribute Manager to store and provide the attributes in RM > --- > > Key: YARN-6858 > URL: https://issues.apache.org/jira/browse/YARN-6858 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api, capacityscheduler, client >Reporter: Naganarasimha G R >Assignee: Naganarasimha G R >Priority: Major > Attachments: YARN-6858-YARN-3409.001.patch, > YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, > YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, > YARN-6858-YARN-3409.006.patch > > > Similar to CommonNodeLabelsManager we need to have a centralized manager for > Node Attributes too. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM
[ https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362106#comment-16362106 ] Naganarasimha G R commented on YARN-6858: - Hi [~sunilg], as discussed offline I have addressed your comments; please review. > Attribute Manager to store and provide the attributes in RM > --- > > Key: YARN-6858 > URL: https://issues.apache.org/jira/browse/YARN-6858 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api, capacityscheduler, client >Reporter: Naganarasimha G R >Assignee: Naganarasimha G R >Priority: Major > Attachments: YARN-6858-YARN-3409.001.patch, > YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, > YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, > YARN-6858-YARN-3409.006.patch > > > Similar to CommonNodeLabelsManager we need to have a centralized manager for > Node Attributes too. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM
[ https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362108#comment-16362108 ] Bibin A Chundatt commented on YARN-6858: Thanks [~Naganarasimha] for the patch. A few comments. {code} initNodeLabelStore(getConfig()); protected void initNodeLabelStore(Configuration conf) throws Exception { // TODO to generalize and make use of the FileSystemNodeLabelsStore } {code} # Had an offline discussion with Sunil G; we thought of using separate stores for node labels and attributes, enabled separately. # The event type used for registration is wrong; it should be fixed (see the sketch after this message): {code} if (null != dispatcher) { dispatcher.register(NodeLabelsStoreEventType.class, new ForwardingEventHandler()); } {code} # Param name mismatch in the following method: {code} /** * @param nodeAttributeMappings * @param newAttributesToBeAdded * @return Map>, node -> Map( * NodeAttribute -> AttributeValue) * @throws IOException, on invalid mapping in the current request or against * already existing NodeAttributes. */ protected Map> validate( Map> nodeAttributeMapping, Map newAttributesToBeAdded, boolean isRemoveOperation) throws IOException {code} # The event type is also wrong in {{ForwardingEventHandler}}. # Rename InternalUpdateLabelsOnNodes to internalUpdateAttributesOnNodes. # Currently the manager doesn't provide a way to filter out nodes of centralized or distributed type; I think we should provide that too. > Attribute Manager to store and provide the attributes in RM > --- > > Key: YARN-6858 > URL: https://issues.apache.org/jira/browse/YARN-6858 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api, capacityscheduler, client >Reporter: Naganarasimha G R >Assignee: Naganarasimha G R >Priority: Major > Attachments: YARN-6858-YARN-3409.001.patch, > YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, > YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, > YARN-6858-YARN-3409.006.patch > > > Similar to CommonNodeLabelsManager we need to have a centralized manager for > Node Attributes too. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
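To illustrate the registration fix in point 2 above, a hedged sketch; {{NodeAttributesStoreEventType}} is a hypothetical name for whatever the attribute store's own event type ends up being, and only the "register the attribute event type, not the label one" point comes from the comment:

{code:java}
// Hypothetical corrected registration: the attribute manager registers a
// handler for its own store event type instead of reusing
// NodeLabelsStoreEventType from CommonNodeLabelsManager.
if (null != dispatcher) {
  dispatcher.register(NodeAttributesStoreEventType.class,
      new ForwardingEventHandler());
}
{code}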
[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362143#comment-16362143 ] Rohith Sharma K S commented on YARN-7919: - I did testing as well by applying this patch; some of the observations are: # The good news is that a table created with the old jar in HBase works perfectly fine with Hadoop using the new jars. # The sad news is that FlowRunTable creation always expects the coprocessor jar to be present at the yarn.timeline-service.hbase.coprocessor.jar.hdfs.location location. This seems to be an issue during upgrade since it adds that location to the table descriptor. The RS starts failing with this exception: {code:java} 2018-02-13 12:27:06,239 ERROR [RS_OPEN_REGION-10.200.4.200:16020-1] handler.OpenRegionHandler: Failed open of region=prod.timelineservice.flowrun,,1518501821250.1e108e74a1f96a1fc799842720d78f4f., starting to roll back the global memstore size. java.io.FileNotFoundException: File file:/Users/rsharmaks/Cluster/atsv2/hadoop/share/hadoop/yarn/timelineservice/hadoop-yarn-server-timelineservice-hbase-3.1.0-SNAPSHOT.jar does not exist {code} On the other side, this was added to keep the coprocessor jar in a shared storage location so that all the RegionServers can load it dynamically. Given that this jar may be located in the local filesystem or in HDFS, we might need to provide additional upgrade steps; otherwise, the RS can't be started. # Coprocessors are dynamically loaded after the YARN-6094 fix, but only one jar file is allowed. It seems HBASE-14548 is *NOT* fixed in the HBase-1.2.x release line, but the issue is that after refactoring the code, we can't add a single coprocessor jar to the flow run table. I am not sure whether there is any other way to achieve this with the HBase-1.2.x release lines. It looks like, after this refactoring, dynamic loading of the coprocessor would be a problem. [~vrushalic] do you have any other way to solve this? [~haibochen] did you test this patch on a fresh cluster by creating a table? Am I missing any steps for table creation? Comments: # The hadoop-yarn-server-timelineservice-hbase module has a src folder structure; this can be removed. # The FlowScanner#resetState javadoc has params which are not in the method; the params cell, currentAggOp, and collectedButNotEmitted can be removed. > Split timelineservice-hbase module to make YARN-7346 easier > --- > > Key: YARN-7919 > URL: https://issues.apache.org/jira/browse/YARN-7919 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineservice >Affects Versions: 3.0.0 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7919.00.patch, YARN-7919.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362232#comment-16362232 ] Wangda Tan commented on YARN-7920: -- Attached patch (ver.2), which addresses the comments from Sunil. > Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch, YARN-7920.002.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update the placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, which defaults to none; the other acceptable > values are a. external-processor (since "algorithm" is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass the SchedulingRequest to the > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7920: - Attachment: YARN-7920.002.patch > Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch, YARN-7920.002.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update the placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, which defaults to none; the other acceptable > values are a. external-processor (since "algorithm" is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass the SchedulingRequest to the > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM
[ https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362296#comment-16362296 ] genericqa commented on YARN-6858: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} YARN-3409 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 27s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 34s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} YARN-3409 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} YARN-3409 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 115 unchanged - 1 fixed = 124 total (was 116) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 5s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 24s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 12s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6858 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910367/YARN-6858-YARN-3409.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite
[jira] [Updated] (YARN-7916) Remove call to docker logs on failure in container-executor
[ https://issues.apache.org/jira/browse/YARN-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shane Kumpf updated YARN-7916: -- Attachment: YARN-7916.001.patch > Remove call to docker logs on failure in container-executor > --- > > Key: YARN-7916 > URL: https://issues.apache.org/jira/browse/YARN-7916 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Shane Kumpf >Assignee: Shane Kumpf >Priority: Major > Attachments: YARN-7916.001.patch > > > If a Docker container fails with a non-zero exit code, container-executor > attempts to run a {{docker logs --tail=250 container_name}} to provide more > details on why the container failed. While the idea is good, the current > implementation will fail for most containers as they are leveraging a launch > script whose output will be redirected to a file. The {{--tail}} option > throws an error if no log output is available for the container, resulting in > the docker logs command returning rc=1 in most cases. > I propose we remove this code from container-executor. Alternative approaches > to handle logging can be explored as part of supporting an image's entrypoint. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications
[ https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362390#comment-16362390 ] Gergo Repas commented on YARN-5028: --- [~yufeigu] The two fields that are needed for the unit tests to pass are: # AMContainerResourceRequests ** needed for multiple TestWorkPreservingRMRestart tests ** if this field is not present the following exception is being thrown: [https://github.com/apache/hadoop/blob/0c5d7d71a80bccd4ad7eab269d0727b999606a7e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java#L494] # ApplicationType ** needed for TestRMRestart ** the exception when this field is not present: {code}java.lang.NullPointerException at org.apache.hadoop.util.StringUtils.toLowerCase(StringUtils.java:1127) at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:869) at org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartGetApplicationList(TestRMRestart.java:1017) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > RMStateStore should trim down app state for completed applications > -- > > Key: YARN-5028 > URL: https://issues.apache.org/jira/browse/YARN-5028 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Gergo Repas >Priority: Major > Attachments: YARN-5028.000.patch, YARN-5028.001.patch, > YARN-5028.002.patch > > > RMStateStore stores enough information to recover applications in case of a > restart. The store also retains this information for completed applications > to serve their status to REST, WebUI, Java and CLI clients. We don't need all > the information we store today to serve application status; for instance, we > don't need the {{ApplicationSubmissionContext}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
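For illustration, a minimal sketch of a trimmed-down submission context that keeps the two fields listed above. This is an assumption about the approach, not the actual patch, and the set of fields copied here is deliberately incomplete.

{code}
// Hypothetical sketch: build a trimmed ApplicationSubmissionContext for the
// state store while preserving the two fields the recovery paths dereference.
ApplicationSubmissionContext trimmed =
    Records.newRecord(ApplicationSubmissionContext.class);
trimmed.setApplicationId(srcCtx.getApplicationId());
trimmed.setApplicationName(srcCtx.getApplicationName());
// Needed by RMAppManager during recovery (see the link above):
trimmed.setAMContainerResourceRequests(srcCtx.getAMContainerResourceRequests());
// Needed by ClientRMService.getApplications() (see the NPE above):
trimmed.setApplicationType(srcCtx.getApplicationType());
{code}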
[jira] [Updated] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated YARN-7677: -- Attachment: YARN-7677.004.patch > Docker image cannot set HADOOP_CONF_DIR > --- > > Key: YARN-7677 > URL: https://issues.apache.org/jira/browse/YARN-7677 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Eric Badger >Assignee: Jim Brennan >Priority: Major > Attachments: YARN-7677.001.patch, YARN-7677.002.patch, > YARN-7677.003.patch, YARN-7677.004.patch > > > Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether > it's set by the user or not. It completely bypasses the whitelist and so > there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes > problems in the Docker use case where Docker containers will set up their own > environment and have their own {{HADOOP_CONF_DIR}} preset in the image > itself. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
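As a rough illustration of the whitelist semantics this bug calls for (an assumption about the fix, not the patch itself): an NM-supplied default must not clobber a value the user or the image already set.

{code}
// Hypothetical sketch of whitelist handling: only inject the NodeManager's
// value when the container environment does not already define the variable.
for (String var : whitelistVars) {          // e.g. HADOOP_CONF_DIR
  String nmValue = System.getenv(var);      // the NodeManager's own value
  if (nmValue != null) {
    environment.putIfAbsent(var, nmValue);  // keep the user/image value if set
  }
}
{code}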
[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362445#comment-16362445 ] Jim Brennan commented on YARN-7677: --- I submitted a new patch that addresses some of the style-check issues. The other failures appear to be unrelated temporary build issues. Hopefully those will not recur. I did not address the style-check issue of too many arguments for writeLaunchEnv() - adding an argument in this case seemed the most appropriate approach. > Docker image cannot set HADOOP_CONF_DIR > --- > > Key: YARN-7677 > URL: https://issues.apache.org/jira/browse/YARN-7677 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Eric Badger >Assignee: Jim Brennan >Priority: Major > Attachments: YARN-7677.001.patch, YARN-7677.002.patch, > YARN-7677.003.patch, YARN-7677.004.patch > > > Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether > it's set by the user or not. It completely bypasses the whitelist and so > there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes > problems in the Docker use case where Docker containers will set up their own > environment and have their own {{HADOOP_CONF_DIR}} preset in the image > itself. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7916) Remove call to docker logs on failure in container-executor
[ https://issues.apache.org/jira/browse/YARN-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362485#comment-16362485 ] genericqa commented on YARN-7916: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 25m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 32s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 58m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7916 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910389/YARN-7916.001.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 7b0c5ea8e5c6 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0c5d7d7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19677/testReport/ | | Max. process+thread count | 440 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19677/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Remove call to docker logs on failure in container-executor > --- > > Key: YARN-7916 > URL: https://issues.apache.org/jira/browse/YARN-7916 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Shane Kumpf >Assignee: Shane Kumpf >Priority: Major > Attachments: YARN-7916.001.patch > > > If a Docker container fails with a non-zero exit code, container-executor > attempts to run a {{docker logs --tail=250 container_name}} to provide more > details on why the container failed. While the idea is good, the
[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362500#comment-16362500 ] genericqa commented on YARN-7920: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 15s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 3s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 8 new + 408 unchanged - 0 fixed = 416 total (was 408) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 35s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 39s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}173m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7920 | | JIRA Patch
[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362603#comment-16362603 ] genericqa commented on YARN-7677: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 37s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 3 new + 164 unchanged - 2 fixed = 167 total (was 166) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 10s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing | | | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7677 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910396/YARN-7677.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 1948c8cbd198 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit
[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362615#comment-16362615 ] Hudson commented on YARN-7813: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13651 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13651/]) YARN-7813: Capacity Scheduler Intra-queue Preemption should be (epayne: rev c5e6e3de1c31eda052f89eddd7bba288625936b9) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestConfigurationMutationACLPolicies.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/ProtocolHATestBase.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/IntraQueueCandidatesSelector.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/QueueCLI.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/QueueInfo.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java > Capacity Scheduler Intra-queue Preemption should be configurable for each > queue > --- > > Key: YARN-7813 > URL: https://issues.apache.org/jira/browse/YARN-7813 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 2.9.0, 2.8.3, 3.0.0 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > 
Attachments: YARN-7813.001.patch, YARN-7813.002.patch > > > Just as inter-queue (a.k.a. cross-queue) preemption is configurable per > queue, intra-queue (a.k.a. in-queue) preemption should be configurable per > queue. If a queue does not have a setting for intra-queue preemption, it > should inherit its parent's value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
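A sketch of the inheritance rule described in the summary; the configuration key and the accessor below are illustrative assumptions, not necessarily the names the patch introduces.

{code}
// Hypothetical sketch: a per-queue intra-queue preemption flag that falls
// back to the parent queue's effective value when not set explicitly.
String key = CapacitySchedulerConfiguration.PREFIX + queuePath
    + ".intra-queue-preemption.disable_preemption";   // assumed key name
if (csConf.get(key) == null && parent != null) {
  return parent.getIntraQueuePreemptionDisabled();    // inherit from parent
}
return csConf.getBoolean(key, false);
{code}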
[jira] [Updated] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7813: - Attachment: YARN-7813.002.branch-3.0.patch > Capacity Scheduler Intra-queue Preemption should be configurable for each > queue > --- > > Key: YARN-7813 > URL: https://issues.apache.org/jira/browse/YARN-7813 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 2.9.0, 2.8.3, 3.0.0 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, > YARN-7813.002.patch > > > Just as inter-queue (a.k.a. cross-queue) preemption is configurable per > queue, intra-queue (a.k.a. in-queue) preemption should be configurable per > queue. If a queue does not have a setting for intra-queue preemption, it > should inherit its parent's value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362674#comment-16362674 ] Eric Payne commented on YARN-7813: -- Thanks [~jlowe]. I committed to trunk and branch-3.1. The patch does not cleanly backport to 3.0 or earlier, so I am attaching patches for those. > Capacity Scheduler Intra-queue Preemption should be configurable for each > queue > --- > > Key: YARN-7813 > URL: https://issues.apache.org/jira/browse/YARN-7813 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 2.9.0, 2.8.3, 3.0.0 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, > YARN-7813.002.patch > > > Just as inter-queue (a.k.a. cross-queue) preemption is configurable per > queue, intra-queue (a.k.a. in-queue) preemption should be configurable per > queue. If a queue does not have a setting for intra-queue preemption, it > should inherit its parent's value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362696#comment-16362696 ] Jim Brennan commented on YARN-7677: --- I believe these are intermittent failures: {noformat} ERROR] Failures: [ERROR] TestContainerManager.testContainerUpgradeRollbackDueToFailure:880 The Rolled-back process should be a different pid. Actual: 17405 [ERROR] TestContainerSchedulerQueuing.testKillOpportunisticForGuaranteedContainer:547 expected: but was: {noformat} The check-style issues are due to adding an 8th argument to writeLaunchEnv(), and ContainerLaunch.call() going over 150 lines. I can remove an empty line to address that, but I'm not sure it's worth it? Aside from those issues, I think this Jira is ready to review. ([~jlowe], [~ebadger], [~shaneku...@gmail.com], [~billie.rinaldi]) > Docker image cannot set HADOOP_CONF_DIR > --- > > Key: YARN-7677 > URL: https://issues.apache.org/jira/browse/YARN-7677 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Eric Badger >Assignee: Jim Brennan >Priority: Major > Attachments: YARN-7677.001.patch, YARN-7677.002.patch, > YARN-7677.003.patch, YARN-7677.004.patch > > > Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether > it's set by the user or not. It completely bypasses the whitelist and so > there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes > problems in the Docker use case where Docker containers will set up their own > environment and have their own {{HADOOP_CONF_DIR}} preset in the image > itself. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7926) Copy and paste errors in log messages
Zhenhao Li created YARN-7926: Summary: Copy and paste errors in log messages Key: YARN-7926 URL: https://issues.apache.org/jira/browse/YARN-7926 Project: Hadoop YARN Issue Type: Bug Reporter: Zhenhao Li We are a group of researchers from Canada and we are studying refactoring in log messages. We found that there are some possible copy and paste errors in the log messages, and we think they may cause some confusion when operators are reading the log messages. The problem is found in the following two methods: _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* and _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* These are two very similar methods (possibly code clones) in different classes. The log messages in *both methods* are: _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);_ _LOG.debug(“_*getNewApplication* try # + … ); Since one method is getting a new application and the other is creating a new application, we believe that the log messages are incorrectly copied and should be changed. Please let us know if there is anything further we can provide to help fix the problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
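For clarity, this is roughly what the de-duplicated messages might look like; the variable names are illustrative and the exact wording is up to the fix.

{code}
// In FederationClientInterceptor.getNewApplication():
LOG.warn("Unable to get a new ApplicationId in SubCluster " + subClusterId);
LOG.debug("getNewApplication try #{}", retryCount);

// In FederationInterceptorREST.createNewApplication():
LOG.warn("Unable to create a new ApplicationId in SubCluster " + subClusterId);
LOG.debug("createNewApplication try #{}", retryCount);
{code}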
[jira] [Updated] (YARN-7926) Copy and paste errors in log messages
[ https://issues.apache.org/jira/browse/YARN-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenhao Li updated YARN-7926: - Affects Version/s: 3.0.0 > Copy and paste errors in log messages > - > > Key: YARN-7926 > URL: https://issues.apache.org/jira/browse/YARN-7926 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Zhenhao Li >Priority: Minor > Labels: easyfix > > We are a group of researchers from Canada and we are studying refactoring in > log messages. We found that there are some possible copy and paste errors in > the log messages, and we think it may cause some confusion when operators are > reading the log messages. > The problem is found in the following two methods: > _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* > > and > _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* > These are two very similar methods (possibly code clones) in different > classes. > The log messages in *both methods* are: > _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);__ > _ _LOG.debug(“_*getNewApplication* try # + … ); > Since one method is getting new application and one method is creating new > application, we believe that the log messages are incorrectly copied and > should be changed. > Please let us know if there is anything that we can further provide you with > fixing the problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7926) Copy and paste errors in log messages
[ https://issues.apache.org/jira/browse/YARN-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenhao Li updated YARN-7926: - Description: We are a group of researchers from Canada and we are studying refactoring in log messages. We found that there are some possible copy and paste errors in the log messages, and we think it may cause some confusion when operators are reading the log messages. The problem is found in the following two methods: _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* and _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* These are two very similar methods (possibly code clones) in different classes. The log messages in *both methods* are: _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);__ _ _LOG.debug(“_*getNewApplication* try # + … ); Since one method is getting new application and one method is creating new application, we believe that the log messages are incorrectly copied and should be changed. Please let us know if there is anything that we can further provide you with fixing the problem. was: We are a group of researchers from Canada and we are studying refactoring in log messages. We found that there are some possible copy and paste errors in the log messages, and we think it may cause some confusion when operators are reading the log messages. The problem is found in the following two methods: _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* and _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* These are two very similar methods (possibly code clones) in different classes. The log messages in *both methods* are: _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);__ _ _LOG.debug(“_*getNewApplication* try # + … ); Since one method is getting new application and one method is creating new application, we believe that the log messages are incorrectly copied and should be changed. Please let us know if there is anything that we can further provide you with fixing the problem. > Copy and paste errors in log messages > - > > Key: YARN-7926 > URL: https://issues.apache.org/jira/browse/YARN-7926 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Zhenhao Li >Priority: Minor > Labels: easyfix > > We are a group of researchers from Canada and we are studying refactoring in > log messages. We found that there are some possible copy and paste errors in > the log messages, and we think it may cause some confusion when operators are > reading the log messages. > > The problem is found in the following two methods: > _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* > > and > _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* > These are two very similar methods (possibly code clones) in different > classes. > The log messages in *both methods* are: > _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);__ > _ _LOG.debug(“_*getNewApplication* try # + … ); > Since one method is getting new application and one method is creating new > application, we believe that the log messages are incorrectly copied and > should be changed. > Please let us know if there is anything that we can further provide you with > fixing the problem. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7926) Copy and paste errors in log messages
[ https://issues.apache.org/jira/browse/YARN-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenhao Li updated YARN-7926: - Description: We are a group of researchers from Canada and we are studying refactoring in log messages. We found that there are some possible copy and paste errors in the log messages, and we think it may cause some confusion when operators are reading the log messages. The problem is found in the following two methods: _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* and _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* These are two very similar methods (possibly code clones) in different classes. The log messages in *both methods* are: _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);_ _LOG.debug(“_*getNewApplication* try # + … ); Since one method is getting new application and one method is creating new application, we believe that the log messages are incorrectly copied and should be changed. Please let us know if there is anything that we can further provide you with fixing the problem. was: We are a group of researchers from Canada and we are studying refactoring in log messages. We found that there are some possible copy and paste errors in the log messages, and we think it may cause some confusion when operators are reading the log messages. The problem is found in the following two methods: _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* and _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* These are two very similar methods (possibly code clones) in different classes. The log messages in *both methods* are: _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);__ _ _LOG.debug(“_*getNewApplication* try # + … ); Since one method is getting new application and one method is creating new application, we believe that the log messages are incorrectly copied and should be changed. Please let us know if there is anything that we can further provide you with fixing the problem. > Copy and paste errors in log messages > - > > Key: YARN-7926 > URL: https://issues.apache.org/jira/browse/YARN-7926 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Zhenhao Li >Priority: Minor > Labels: easyfix > > We are a group of researchers from Canada and we are studying refactoring in > log messages. We found that there are some possible copy and paste errors in > the log messages, and we think it may cause some confusion when operators are > reading the log messages. > > The problem is found in the following two methods: > _org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST._*_createNewApplication()_* > > and > > _org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor._*_getNewApplication()_* > These are two very similar methods (possibly code clones) in different > classes. > The log messages in *both methods* are: > _LOG.warn("Unable to_ *_create a new ApplicationId_* _in SubCluster " …);_ > _LOG.debug(“_*getNewApplication* try # + … ); > Since one method is getting new application and one method is creating new > application, we believe that the log messages are incorrectly copied and > should be changed. > Please let us know if there is anything that we can further provide you with > fixing the problem. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7221) Add security check for privileged docker container
[ https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362741#comment-16362741 ] Eric Yang commented on YARN-7221: - [~ebadger] Thank you for the review. Our decision was to run docker as root but make the localized directory read-only (YARN-7904). Users can bind-mount data directories for a multi-user docker image to reflect file permissions properly for a trusted image. We need to validate that the localized directory can be read-only for root. You are right that the uid:gid pair is handled in the Java layer. I will rebase the code to handle this correctly. > Add security check for privileged docker container > -- > > Key: YARN-7221 > URL: https://issues.apache.org/jira/browse/YARN-7221 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-7221.001.patch, YARN-7221.002.patch, > YARN-7221.003.patch, YARN-7221.004.patch > > > When a docker container is running with privileges, the majority of use cases > involve some program starting as root and then dropping privileges to another > user, e.g. httpd starting privileged to bind to port 80, then dropping > privileges to the www user. > # We should add a security check for submitting users, to verify they have > "sudo" access to run a privileged container. > # We should remove --user=uid:gid for privileged containers. > > Docker can be launched with --privileged=true and the --user=uid:gid flag. > With this parameter combination, the user will not have access to become the > root user. All docker exec commands will drop to the uid:gid user instead of > granting privileges. A user can gain root privileges if the container file > system contains files that give the user extra power, but this type of image > is considered dangerous. A non-privileged user can launch a container with > special bits to acquire the same level of root power. Hence, we lose control > of which images should be run with --privileged, and who has sudo rights to > use privileged container images. As a result, we should check for sudo access > and then decide whether to parameterize --privileged=true OR --user=uid:gid. > This will avoid leading developers down the wrong path. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
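A rough sketch of the decision described in the issue summary; the ACL check and the variable names below are assumptions, not the patch's code.

{code}
// Hypothetical sketch: choose between --privileged and --user when building
// the docker run command, gated on a sudo/ACL check for the submitting user.
if (requestedPrivileged) {
  if (!privilegedAcl.contains(submittingUser)) {      // assumed ACL check
    throw new SecurityException(submittingUser
        + " is not allowed to run privileged containers");
  }
  runCommand.add("--privileged");   // no --user: the image manages privileges
} else {
  runCommand.add("--user=" + uid + ":" + gid);
}
{code}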
[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362740#comment-16362740 ] Rohith Sharma K S commented on YARN-7919: - I had a chat with HBase member [~elserj] offline; he suggested building a fat timelineservice-hbase-server jar which would be used only by the HBase co-processor. This suggestion makes sense to me. Let's keep the code base modularly split, but build the timelineservice-hbase-server jar as a fat jar. [~haibochen], would you incorporate this change as well so that we don't break backward compatibility? > Split timelineservice-hbase module to make YARN-7346 easier > --- > > Key: YARN-7919 > URL: https://issues.apache.org/jira/browse/YARN-7919 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineservice >Affects Versions: 3.0.0 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7919.00.patch, YARN-7919.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362746#comment-16362746 ] Haibo Chen commented on YARN-7919: -- Yes, I was working on it with the maven-assembly-plugin. Will upload a new patch after I do some tests > Split timelineservice-hbase module to make YARN-7346 easier > --- > > Key: YARN-7919 > URL: https://issues.apache.org/jira/browse/YARN-7919 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineservice >Affects Versions: 3.0.0 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7919.00.patch, YARN-7919.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other
[ https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362747#comment-16362747 ] Eric Yang commented on YARN-7446: - [~ebadger] uid:gid will only set the primary group. The {{\-\-group-add}} call is still required to add membership in secondary groups, to ensure the container user has the exact same rights as on the host system. Given this, do we need to remove {{\-\-group-add}}? > Docker container privileged mode and --user flag contradict each other > -- > > Key: YARN-7446 > URL: https://issues.apache.org/jira/browse/YARN-7446 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-7446.001.patch, YARN-7446.002.patch > > > In the current implementation, when privileged=true, --user flag is also > passed to docker for launching container. In reality, the container has no > way to use root privileges unless there is sticky bit or sudoers in the image > for the specified user to gain privileges again. To avoid duplication of > dropping and reacquire root privileges, we can reduce the duplication of > specifying both flag. When privileged mode is enabled, --user flag should be > omitted. When non-privileged mode is enabled, --user flag is supplied. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
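To make the trade-off concrete, here is a sketch of how the flags combine for a non-privileged container; the variable names are illustrative.

{code}
// Hypothetical sketch: --user sets only the primary group, so secondary
// group memberships must be restored explicitly with --group-add.
args.add("--user=" + uid + ":" + gid);      // primary group only
for (String group : secondaryGroups) {
  args.add("--group-add=" + group);         // restore secondary memberships
}
// For a privileged (root) container, both flags could be omitted.
{code}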
[jira] [Updated] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor
[ https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-7900: --- Attachment: YARN-7900.v2.patch > [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor > > > Key: YARN-7900 > URL: https://issues.apache.org/jira/browse/YARN-7900 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Major > Attachments: YARN-7900.v1.patch, YARN-7900.v2.patch > > > Inside the stateful FederationInterceptor (YARN-7899), we need a component > similar to AMRMClient that remembers all pending (outstanding) requests we've > sent to YarnRM, auto re-registers, and does a full pending resend when YarnRM > fails over and throws ApplicationMasterNotRegisteredException back. This JIRA > adds this component as AMRMClientRelayer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
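A very rough sketch of the relayer behavior described above; the helper methods here are hypothetical and the patch's actual API will differ.

{code}
// Hypothetical sketch of AMRMClientRelayer's failover handling.
public AllocateResponse allocate(AllocateRequest request) throws Exception {
  remember(request);                  // track pending asks/releases locally
  try {
    return rmClient.allocate(request);
  } catch (ApplicationMasterNotRegisteredException e) {
    // YarnRM failed over: auto re-register, then resend everything pending.
    rmClient.registerApplicationMaster(registerRequest);
    return rmClient.allocate(mergeWithAllPending(request));
  }
}
{code}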
[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications
[ https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362817#comment-16362817 ] Yufei Gu commented on YARN-5028: Thanks [~grepas]. Yes. We do need these two fields. {code} this.applicationACLsManager.addApplication(applicationId, submissionContext.getAMContainerSpec().getApplicationACLs()); {code} The code above in method {{createAndPopulateNewRMApp}} shows that getAMContainerSpec() shouldn't be null, which leads to one question about the patch: {code} if (srcCtx.getAMContainerSpec() != null) { context.setAMContainerSpec(new ContainerLaunchContextPBImpl()); } {code} If srcCtx.getAMContainerSpec() is null, then context.getAMContainerSpec() is null, which causes an NPE. Another question is why use a new AMContainerSpec instance for context instead of reusing the one from srcCtx? > RMStateStore should trim down app state for completed applications > -- > > Key: YARN-5028 > URL: https://issues.apache.org/jira/browse/YARN-5028 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Gergo Repas >Priority: Major > Attachments: YARN-5028.000.patch, YARN-5028.001.patch, > YARN-5028.002.patch > > > RMStateStore stores enough information to recover applications in case of a > restart. The store also retains this information for completed applications > to serve their status to REST, WebUI, Java and CLI clients. We don't need all > the information we store today to serve application status; for instance, we > don't need the {{ApplicationSubmissionContext}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
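To spell out the NPE concern, here is an illustrative null-safe variant; the names follow the patch snippet above, while the surrounding code is an assumption.

{code}
// Hypothetical sketch: trim the AM container spec while preserving the ACLs
// that createAndPopulateNewRMApp dereferences.
ContainerLaunchContext amSpec = srcCtx.getAMContainerSpec();
if (amSpec != null) {
  ContainerLaunchContext trimmedSpec = new ContainerLaunchContextPBImpl();
  trimmedSpec.setApplicationACLs(amSpec.getApplicationACLs());
  context.setAMContainerSpec(trimmedSpec);
}
// If amSpec is null here, a later
// submissionContext.getAMContainerSpec().getApplicationACLs() call would NPE.
{code}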
[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other
[ https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362853#comment-16362853 ] Eric Badger commented on YARN-7446: --- I don't see how that adds up though. The user is root, so they have all the privileges they need. If we're assuming that they need to be in a certain group, then how can we assume that they don't need the primary group? Is there a reason that they should have the additional groups but not the primary group? I think the answer is that if they need one, they need all. So we can either give them all or not give them any to be consistent. > Docker container privileged mode and --user flag contradict each other > -- > > Key: YARN-7446 > URL: https://issues.apache.org/jira/browse/YARN-7446 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-7446.001.patch, YARN-7446.002.patch > > > In the current implementation, when privileged=true, --user flag is also > passed to docker for launching container. In reality, the container has no > way to use root privileges unless there is sticky bit or sudoers in the image > for the specified user to gain privileges again. To avoid duplication of > dropping and reacquire root privileges, we can reduce the duplication of > specifying both flag. When privileged mode is enabled, --user flag should be > omitted. When non-privileged mode is enabled, --user flag is supplied. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other
[ https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362881#comment-16362881 ] Eric Yang commented on YARN-7446: - [~ebadger] I see what you are saying now. For root, there is no need to add user groups. You were right. I wasn't following correctly, and I will remove group-add accordingly. Thank you > Docker container privileged mode and --user flag contradict each other > -- > > Key: YARN-7446 > URL: https://issues.apache.org/jira/browse/YARN-7446 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Major > Attachments: YARN-7446.001.patch, YARN-7446.002.patch > > > In the current implementation, when privileged=true, --user flag is also > passed to docker for launching container. In reality, the container has no > way to use root privileges unless there is sticky bit or sudoers in the image > for the specified user to gain privileges again. To avoid duplication of > dropping and reacquire root privileges, we can reduce the duplication of > specifying both flag. When privileged mode is enabled, --user flag should be > omitted. When non-privileged mode is enabled, --user flag is supplied. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362921#comment-16362921 ] Jason Lowe commented on YARN-7677: -- Thanks for updating the patch! Looks good overall, just a few nits: It might be useful to reduce the varname duplication in sanitizeEnv and help avoid future copy-n-paste errors by creating a small helper function that takes the NM var set, the env to update, the variable name, and the value, and updates both the env and the nm var set. I don't think writeLaunchEnv should expect nmVars to be null. The NM will always have at least one variable to set for each container (e.g. CONTAINER_ID), so in practice this will never be null. It can only be null for tests, and I would argue the test code is responsible for passing something sane (e.g. Collections.emptySet()). sanitizeEnv and sanitizeWindowsEnv should take a Set rather than a LinkedHashSet. Those method implementations do not require the incoming set to be a LinkedHashSet for them to do what they do (even though in practice that is what it will be). > Docker image cannot set HADOOP_CONF_DIR > --- > > Key: YARN-7677 > URL: https://issues.apache.org/jira/browse/YARN-7677 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Eric Badger >Assignee: Jim Brennan >Priority: Major > Attachments: YARN-7677.001.patch, YARN-7677.002.patch, > YARN-7677.003.patch, YARN-7677.004.patch > > > Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether > it's set by the user or not. It completely bypasses the whitelist and so > there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes > problems in the Docker use case where Docker containers will set up their own > environment and have their own {{HADOOP_CONF_DIR}} preset in the image > itself. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
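A minimal sketch of the helper suggested above; the name and exact signature are assumptions.

{code}
// Hypothetical helper: set a variable in the container environment and
// record that the NM (not the user) supplied it, in one place.
private static void addToEnvAndNmVars(Map<String, String> environment,
    Set<String> nmVars, String name, String value) {
  environment.put(name, value);
  nmVars.add(name);
}
{code}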
[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362970#comment-16362970 ] genericqa commented on YARN-7813: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} branch-3.0 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 13s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 30s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s{color} | {color:green} branch-3.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 765 unchanged - 1 fixed = 774 total (was 766) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 52s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 10s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 30s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color}
[jira] [Updated] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-7919: - Attachment: YARN-7919.02.patch > Split timelineservice-hbase module to make YARN-7346 easier > --- > > Key: YARN-7919 > URL: https://issues.apache.org/jira/browse/YARN-7919 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineservice >Affects Versions: 3.0.0 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7919.00.patch, YARN-7919.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-7919: - Attachment: (was: YARN-7919.02.patch) > Split timelineservice-hbase module to make YARN-7346 easier > --- > > Key: YARN-7919 > URL: https://issues.apache.org/jira/browse/YARN-7919 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineservice >Affects Versions: 3.0.0 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7919.00.patch, YARN-7919.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-7919: - Attachment: YARN-7919.02.patch > Split timelineservice-hbase module to make YARN-7346 easier > --- > > Key: YARN-7919 > URL: https://issues.apache.org/jira/browse/YARN-7919 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineservice >Affects Versions: 3.0.0 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7919.00.patch, YARN-7919.01.patch, > YARN-7919.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363006#comment-16363006 ] Jim Brennan commented on YARN-7677: --- Thanks for the review! I will address these issues and put up a new patch. > Docker image cannot set HADOOP_CONF_DIR > --- > > Key: YARN-7677 > URL: https://issues.apache.org/jira/browse/YARN-7677 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Eric Badger >Assignee: Jim Brennan >Priority: Major > Attachments: YARN-7677.001.patch, YARN-7677.002.patch, > YARN-7677.003.patch, YARN-7677.004.patch > > > Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether > it's set by the user or not. It completely bypasses the whitelist and so > there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes > problems in the Docker use case where Docker containers will set up their own > environment and have their own {{HADOOP_CONF_DIR}} preset in the image > itself. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7789) Should fail RM if 3rd resource type is configured but RM uses DefaultResourceCalculator
[ https://issues.apache.org/jira/browse/YARN-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363014#comment-16363014 ] Zian Chen commented on YARN-7789: - [~sunilg], thank you for your comments. However, I don't see why the failed case TestRMEmbeddedElector.testCallbackSynchronization is related to the patch. Could you explain in detail why you think it's related? Thanks! > Should fail RM if 3rd resource type is configured but RM uses > DefaultResourceCalculator > --- > > Key: YARN-7789 > URL: https://issues.apache.org/jira/browse/YARN-7789 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sumana Sathish >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7789.001.patch, YARN-7789.002.patch > > > We may need to revisit this behavior: currently, the RM doesn't fail if a 3rd > resource type is configured; allocated containers will automatically be > assigned the minimum allocation for all resource types except memory, which makes > troubleshooting really hard. I prefer to fail the RM if a 3rd (or more) resource > type is configured inside resource-types.xml. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount
[ https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363078#comment-16363078 ] Miklos Szegedi commented on YARN-7626: -- Thank you for the patch [~Zian Chen]. {code:java} return !(len > 2 && str[0] == '^' && str[len-1] == '$');{code} Optional: I think this is still misleading. is_regex should return 1 on success and 0 on failure. {code:java} // Iterate each permitted values.{code} Optional: I think it would be better to write 'Iterate through each permitted value'. For a permitted value like '/dev/nvidia1:/dev/nvidia1': {code:java} if (prefix == 0) { ret = strcmp(values[i], permitted_values[j]); } else { // If permitted-Values[j] is a REGEX, use REGEX to compare if (is_regex(permitted_values[j]) == 0) { ret = validate_volume_name_with_argument(values[i], permitted_values[j]); } else { ret = strncmp(values[i], permitted_values[j], tmp_ptr - values[i]); } } {code} Technically, where the prefix is not null (including the regex match), the code should check only the characters before the ':' separator. It is currently checking the whole values[i]; you should apply the regex only to [values[i] ... tmp_ptr]. {code:java} /** * Helper function to help normalize mounts for checking if mounts are * permitted. The function does the following - * 1. Find the canonical path for mount using realpath * 2. If the path is a directory, add a '/' at the end (if not present) * 3. Return a copy of the canonicalised path (to be freed by the caller) * @param mount path to be canonicalised * @return pointer to canonicalised path, NULL on error */ static char* normalize_mount(const char* mount, int isUserMount) { {code} There is no @param documentation for isUserMount; in fact, I would name it isRegexAllowed to avoid confusion. {code:java} const char *container_executor_cfg_path = normalize_mount(get_config_path(""), 1);{code} I do not understand why the config path could be a regex. {code:java} tmp_path_buffer[0] = normalize_mount(mount_src, 1);{code} Shouldn't this be 0, too? I have a few conceptual issues with the latest patch. # First of all, normalize_mounts walks through the permitted mounts and resolves symlinks, but it does not resolve them if isUserMount (isRegex) is 1. What if the regex resolves to a symlink? I think it would probably be more future-proof if normalize_mounts applied the regex to the directory tree and then called the original normalize_mount on the resulting file names, which returns the real path for each. This would eliminate the need for passing isUserMount all the way through the call structure. It would also help to avoid issues that appear with invalid regexes, etc. # Technically, a regex without the ^$ pair is a valid regex. It would be more precise and future-proof to mark regexes with a prefix like {{regex:/dev/device[0-9]+}}. In this case we would not need to use just a subset for matching. 
> Allow regular expression matching in container-executor.cfg for devices and > named docker volumes mount > -- > > Key: YARN-7626 > URL: https://issues.apache.org/jira/browse/YARN-7626 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Zian Chen >Assignee: Zian Chen >Priority: Major > Attachments: YARN-7626.001.patch, YARN-7626.002.patch, > YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch, > YARN-7626.006.patch, YARN-7626.007.patch, YARN-7626.008.patch > > > Currently, when we configure some of the GPU device related fields (like ) in > container-executor.cfg, these fields are generated based on different driver > versions or GPU device names. We want to enable regular expression matching > so that users don't need to manually set up these fields when configuring > container-executor.cfg. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
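The code under review above is C inside container-executor; purely to illustrate the {{regex:}}-prefix idea from the review comment, here is a self-contained Java sketch (the marker string and helper name are assumptions, not the patch's implementation):
{code:java}
import java.util.regex.Pattern;

public class PermittedValueSketch {
  // A permitted value is treated as a literal unless explicitly marked.
  private static final String REGEX_MARKER = "regex:";

  static boolean isPermitted(String requested, String permitted) {
    if (permitted.startsWith(REGEX_MARKER)) {
      // Explicitly marked: match the full requested value against the pattern.
      String pattern = permitted.substring(REGEX_MARKER.length());
      return Pattern.matches(pattern, requested);
    }
    // No marker: exact, literal comparison only, even if the value
    // happens to contain regex metacharacters.
    return requested.equals(permitted);
  }

  public static void main(String[] args) {
    System.out.println(isPermitted("/dev/nvidia3", "regex:/dev/nvidia[0-9]+")); // true
    System.out.println(isPermitted("/dev/nvidia3", "/dev/nvidia3"));            // true
    System.out.println(isPermitted("/dev/nvidia3", "/dev/nvidia[0-9]+"));       // false
  }
}
{code}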
[jira] [Commented] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor
[ https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363100#comment-16363100 ] genericqa commented on YARN-7900: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 63 unchanged - 0 fixed = 65 total (was 63) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 3s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 6s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 20s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}203m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities | |
[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363157#comment-16363157 ] genericqa commented on YARN-7919: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 19 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 26s{color} | {color:orange} root: The patch generated 1 new + 29 unchanged - 9 fixed = 30 total (was 38) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 10s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s{color} | {color:green} hadoop-assemblies in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {col
[jira] [Assigned] (YARN-4946) RM should write out Aggregated Log Completion file flag next to logs
[ https://issues.apache.org/jira/browse/YARN-4946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen reassigned YARN-4946: Assignee: (was: Haibo Chen) > RM should write out Aggregated Log Completion file flag next to logs > > > Key: YARN-4946 > URL: https://issues.apache.org/jira/browse/YARN-4946 > Project: Hadoop YARN > Issue Type: Improvement > Components: log-aggregation >Affects Versions: 2.8.0 >Reporter: Robert Kanter >Priority: Major > > MAPREDUCE-6415 added a tool that combines the aggregated log files for each > Yarn App into a HAR file. When run, it seeds the list by looking at the > aggregated logs directory, and then filters out ineligible apps. One of the > criteria involves checking with the RM that an Application's log aggregation > status is not still running and has not failed. When the RM "forgets" about > an older completed Application (e.g. RM failover, enough time has passed, > etc.), the tool won't find the Application in the RM and will just assume that > its log aggregation succeeded, even if it actually failed or is still running. > We can solve this problem by doing the following: > # When the RM sees that an Application has successfully finished aggregating > its logs, it will write a flag file next to that Application's log files > # The tool no longer talks to the RM at all. When looking at the FileSystem, > it now uses that flag file to determine if it should process those log files. > If the file is there, it archives, otherwise it does not. > # As part of the archiving process, it will delete the flag file > # (If you don't run the tool, the flag file will eventually be cleaned up by > the JHS when it cleans up the aggregated logs because it's in the same > directory) > This improvement has several advantages: > # The edge case about "forgotten" Applications is fixed > # The tool no longer has to talk to the RM; it only has to consult HDFS. > This is simpler -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
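A sketch of steps 2 and 3 from the list above, using the Hadoop FileSystem API; the flag file name is a placeholder assumption, since the actual name would be defined by an eventual patch:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AggregationFlagSketch {
  // Placeholder flag file name written by the RM next to the app's logs.
  static final String FLAG_FILE = "_aggregation_complete";

  // Step 2: the tool archives only when the RM has written the flag file.
  static boolean readyToArchive(FileSystem fs, Path appLogDir) throws IOException {
    return fs.exists(new Path(appLogDir, FLAG_FILE));
  }

  // Step 3: the flag file is deleted as part of the archiving process.
  static void finishArchiving(FileSystem fs, Path appLogDir) throws IOException {
    fs.delete(new Path(appLogDir, FLAG_FILE), false);
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path appLogDir = new Path("/tmp/app-logs/application_0000000000000_0001");
    System.out.println("ready to archive: " + readyToArchive(fs, appLogDir));
  }
}
{code}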
[jira] [Assigned] (YARN-6586) YARN to facilitate HTTPS in AM web server
[ https://issues.apache.org/jira/browse/YARN-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen reassigned YARN-6586: Assignee: (was: Haibo Chen) > YARN to facilitate HTTPS in AM web server > - > > Key: YARN-6586 > URL: https://issues.apache.org/jira/browse/YARN-6586 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Priority: Major > > The MR AM today does not support HTTPS in its web server, so the traffic between > RMWebproxy and the MR AM is in clear text. > MR cannot easily achieve this mainly because MR AMs are untrusted by YARN. A > potential solution purely within MR, similar to what Spark has implemented, > is to allow users, when they enable HTTPS in an MR job, to provide their own > keystore file; the file is then uploaded to the distributed cache and > localized for the MR AM container. The configuration users need to do is complex. > More importantly, in typical deployments, web browsers go through > RMWebProxy to indirectly access the MR AM web server. In order to support MR AM > HTTPS, RMWebProxy therefore needs to trust the user-provided keystore, which > is problematic. > Alternatively, we can add an endpoint in the NM web server that acts as a proxy > between the AM web server and RMWebProxy. RMWebproxy, when configured to do so, > will send requests in HTTPS to the NM on which the AM is running, and the NM > can then communicate with the local AM web server over HTTP. This adds one > hop between RMWebproxy and the AM, but both MR and Spark can use such a solution. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
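Purely to make the proposed extra hop concrete, a rough sketch using the JDK's built-in HttpServer as a stand-in for the NM web server; the endpoint path, query convention, and AM address resolution are all assumptions, not a design from this JIRA:
{code:java}
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;

public class NmAmProxySketch {
  public static void main(String[] args) throws Exception {
    // The NM side terminates the (conceptually HTTPS) hop from RMWebProxy...
    HttpServer server = HttpServer.create(new InetSocketAddress(8042), 0);
    server.createContext("/am-proxy", exchange -> {
      // Assumed query convention: ?port=<AM web port>. A real endpoint
      // would resolve the AM address from the running container instead.
      String port = exchange.getRequestURI().getQuery().split("=")[1];
      // ...and relays the request to the co-located AM over plain HTTP.
      URL am = new URL("http://localhost:" + port + "/");
      try (InputStream in = am.openStream()) {
        byte[] body = in.readAllBytes();
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream out = exchange.getResponseBody()) {
          out.write(body);
        }
      }
    });
    server.start();
  }
}
{code}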
[jira] [Updated] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7813: - Attachment: YARN-7813.003.branch-3.0.patch > Capacity Scheduler Intra-queue Preemption should be configurable for each > queue > --- > > Key: YARN-7813 > URL: https://issues.apache.org/jira/browse/YARN-7813 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 2.9.0, 2.8.3, 3.0.0 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, > YARN-7813.002.patch, YARN-7813.003.branch-3.0.patch > > > Just as inter-queue (a.k.a. cross-queue) preemption is configurable per > queue, intra-queue (a.k.a. in-queue) preemption should be configurable per > queue. If a queue does not have a setting for intra-queue preemption, it > should inherit its parent's value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363207#comment-16363207 ] Eric Payne commented on YARN-7813: -- I was checking failed unit tests for {{YARN-7813.002.branch-3.0.patch}} and noticed that the {{TestRMWebServicesSchedulerActivities}} failures are caused by this patch. The others are not failing for me in my local repo. I have uploaded a new branch-3.0 patch (003). I will open a JIRA to fix it in trunk and 3.1. > Capacity Scheduler Intra-queue Preemption should be configurable for each > queue > --- > > Key: YARN-7813 > URL: https://issues.apache.org/jira/browse/YARN-7813 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 2.9.0, 2.8.3, 3.0.0 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, > YARN-7813.002.patch, YARN-7813.003.branch-3.0.patch > > > Just as inter-queue (a.k.a. cross-queue) preemption is configurable per > queue, intra-queue (a.k.a. in-queue) preemption should be configurable per > queue. If a queue does not have a setting for intra-queue preemption, it > should inherit its parent's value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7927) YARN-7813 caused test failure in TestRMWebServicesSchedulerActivities
Eric Payne created YARN-7927: Summary: YARN-7813 caused test failure in TestRMWebServicesSchedulerActivities Key: YARN-7927 URL: https://issues.apache.org/jira/browse/YARN-7927 Project: Hadoop YARN Issue Type: Bug Reporter: Eric Payne -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7928) [UI2] Components details not present for Yarn service with Yarn authentication
Yesha Vora created YARN-7928: Summary: [UI2] Components details not present for Yarn service with Yarn authentication Key: YARN-7928 URL: https://issues.apache.org/jira/browse/YARN-7928 Project: Hadoop YARN Issue Type: Bug Components: yarn-ui-v2 Reporter: Yesha Vora Scenario: Launch an HBase app in a secure hadoop cluster where yarn UI authentication is enabled, then validate the Components page. Here, component details are missing from the UI {code:java} Failed to load http://xxx:8198/ws/v2/timeline/apps/application_1518564922635_0001/entities/SERVICE_ATTEMPT?fields=ALL&_=1518567830088: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://xxx:8088' is therefore not allowed access.{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7928) [UI2] Components details not present for Yarn service with Yarn authentication
[ https://issues.apache.org/jira/browse/YARN-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yesha Vora updated YARN-7928: - Affects Version/s: 3.0.0 > [UI2] Components details not present for Yarn service with Yarn > authentication > --- > > Key: YARN-7928 > URL: https://issues.apache.org/jira/browse/YARN-7928 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Affects Versions: 3.0.0 >Reporter: Yesha Vora >Priority: Major > > Scenario: > Launch an HBase app in a secure hadoop cluster where yarn UI authentication is > enabled, then validate the Components page. > Here, component details are missing from the UI > {code:java} > Failed to load > http://xxx:8198/ws/v2/timeline/apps/application_1518564922635_0001/entities/SERVICE_ATTEMPT?fields=ALL&_=1518567830088: > No 'Access-Control-Allow-Origin' header is present on the requested > resource. Origin 'http://xxx:8088' is therefore not allowed access.{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression
[ https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363329#comment-16363329 ] Konstantinos Karanasos commented on YARN-7921: -- Hi [~cheersyang], bq. What I was trying to do with this task is to implement 2), which will be done by implementing {{AbstractConstraint#toString}} methods. Agreed. I was thinking that we could instead create a visitor, so that we can have different toString representations and can also take advantage of the transformations more easily (for example, you can first call a transformation and then do the string representation). But I guess we can start by just overriding the toString method. > Transform a PlacementConstraint to a string expression > -- > > Key: YARN-7921 > URL: https://issues.apache.org/jira/browse/YARN-7921 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > > Purpose: > Make placement constraints viewable in the UI or logs, e.g. print an app's > placement constraint on the RM app page. This helps users work with constraints > and analyze placement issues more easily. > Proposal: > Like what was added for DS, toString is the reverse of > {{PlacementConstraintParser}}: it transforms a PlacementConstraint to a > string using the same syntax. E.g. > {code} > AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m")); > constraintExpr.toString(); > // This prints: IN,NODE,hbase-m > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
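A minimal sketch of the toString approach discussed above, using simplified stand-in classes rather than the real org.apache.hadoop.yarn.api.resource.PlacementConstraint hierarchy:
{code:java}
public class ConstraintToStringSketch {
  abstract static class AbstractConstraint {
    // Each subclass renders itself in the same syntax the parser accepts.
    public abstract String toString();
  }

  static class SingleConstraint extends AbstractConstraint {
    private final String op;     // e.g. "IN" or "NOTIN"
    private final String scope;  // e.g. "NODE" or "RACK"
    private final String tag;    // allocation tag, e.g. "hbase-m"

    SingleConstraint(String op, String scope, String tag) {
      this.op = op;
      this.scope = scope;
      this.tag = tag;
    }

    @Override
    public String toString() {
      // Reverse of the parser syntax: IN,NODE,hbase-m
      return op + "," + scope + "," + tag;
    }
  }

  public static void main(String[] args) {
    System.out.println(new SingleConstraint("IN", "NODE", "hbase-m"));
  }
}
{code}
A visitor, as suggested in the comment, would let multiple renderings coexist without touching the constraint classes themselves; overriding toString is simply the smaller first step.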
[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363348#comment-16363348 ] Konstantinos Karanasos commented on YARN-7920: -- Hi [~leftnoteasy], {quote}I would still prefer "scheduler", otherwise it will be a duplicated config to yarn.resourcemanager.scheduler, and once FS wants to support the feature, we need to add a new option and document, etc. {quote} Sure, makes sense. Re: the patch, I will check the implementation in more detail, but here are a few initial comments about the naming: * The name "external processor" is a bit redundant and not very descriptive. Let's call it {{PlacementConstraintProcessor}}, since this is what it does. * Similarly, in the comments of YarnConfiguration, "external which sits outside of the scheduler" is not very helpful about why this should be used. Let's say "Handle placement constraints by a processor that is agnostic of the scheduler implementation". * Also, shall we rename the {{NoneProcessor}} -> {{DefaultProcessor}} or something along these lines? * In some places you use the term "placement requests". Maybe say scheduling requests? Also, I agree with [~sunilg] that we should update the doc in the same Jira; it should be very few changes. I would also like to hear from [~asuresh], since he added the processor. > Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch, YARN-7920.002.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update the placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, which defaults to none, with other acceptable > values being a. external-processor (since algorithm is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass SchedulingRequest to the > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7929) SLS supports setting container execution
Jiandan Yang created YARN-7929: --- Summary: SLS supports setting container execution Key: YARN-7929 URL: https://issues.apache.org/jira/browse/YARN-7929 Project: Hadoop YARN Issue Type: New Feature Components: scheduler-load-simulator Reporter: Jiandan Yang Assignee: Jiandan Yang SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file cannot set the execution type of a container. This jira will introduce execution type in SLS to enable better simulation. RUMEN has the default execution type GUARANTEED. SYNTH sets the execution type via the fields map_execution_type and reduce_execution_type. SLS sets the execution type via the field container.execution_type. For compatibility, GUARANTEED is used as the default value when the above fields are not set in the trace file. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
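A sketch of the proposed compatibility default when parsing a trace entry. The field name comes from the description above; the map-based lookup is an assumption about how the parsing code might be structured:
{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ExecutionType;

public class SlsExecTypeSketch {
  // Field name proposed in this JIRA for SLS trace files.
  static final String EXEC_TYPE_FIELD = "container.execution_type";

  static ExecutionType executionTypeOf(Map<String, String> taskFields) {
    // For compatibility, fall back to GUARANTEED when the field is absent.
    return ExecutionType.valueOf(
        taskFields.getOrDefault(EXEC_TYPE_FIELD, ExecutionType.GUARANTEED.name()));
  }

  public static void main(String[] args) {
    Map<String, String> task = new HashMap<>();
    System.out.println(executionTypeOf(task)); // GUARANTEED
    task.put(EXEC_TYPE_FIELD, "OPPORTUNISTIC");
    System.out.println(executionTypeOf(task)); // OPPORTUNISTIC
  }
}
{code}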
[jira] [Updated] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiandan Yang updated YARN-7929: Issue Type: Sub-task (was: New Feature) Parent: YARN-5065 > SLS supports setting container execution > > > Key: YARN-7929 > URL: https://issues.apache.org/jira/browse/YARN-7929 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Minor > > SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file > cannot set the execution type of a container. > This jira will introduce execution type in SLS to enable better simulation. > RUMEN has the default execution type GUARANTEED. > SYNTH sets the execution type via the fields map_execution_type and reduce_execution_type. > SLS sets the execution type via the field container.execution_type. > For compatibility, GUARANTEED is used as the default value when the above > fields are not set in the trace file. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiandan Yang updated YARN-7929: Attachment: YARN-7929.001.patch > SLS supports setting container execution > > > Key: YARN-7929 > URL: https://issues.apache.org/jira/browse/YARN-7929 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Minor > Attachments: YARN-7929.001.patch > > > SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file > cannot set the execution type of a container. > This jira will introduce execution type in SLS to enable better simulation. > RUMEN has the default execution type GUARANTEED. > SYNTH sets the execution type via the fields map_execution_type and reduce_execution_type. > SLS sets the execution type via the field container.execution_type. > For compatibility, GUARANTEED is used as the default value when the above > fields are not set in the trace file. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-7929: -- Priority: Major (was: Minor) > SLS supports setting container execution > > > Key: YARN-7929 > URL: https://issues.apache.org/jira/browse/YARN-7929 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Major > Attachments: YARN-7929.001.patch > > > SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file > cannot set the execution type of a container. > This jira will introduce execution type in SLS to enable better simulation. > RUMEN has the default execution type GUARANTEED. > SYNTH sets the execution type via the fields map_execution_type and reduce_execution_type. > SLS sets the execution type via the field container.execution_type. > For compatibility, GUARANTEED is used as the default value when the above > fields are not set in the trace file. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7789) Should fail RM if 3rd resource type is configured but RM uses DefaultResourceCalculator
[ https://issues.apache.org/jira/browse/YARN-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363383#comment-16363383 ] Sunil G commented on YARN-7789: --- It's my bad. I see that it's failing for other tests as well. I'll raise a ticket for the same. Meanwhile, this patch seems good to go. > Should fail RM if 3rd resource type is configured but RM uses > DefaultResourceCalculator > --- > > Key: YARN-7789 > URL: https://issues.apache.org/jira/browse/YARN-7789 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sumana Sathish >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7789.001.patch, YARN-7789.002.patch > > > We may need to revisit this behavior: currently, the RM doesn't fail if a 3rd > resource type is configured; allocated containers will automatically be > assigned the minimum allocation for all resource types except memory, which makes > troubleshooting really hard. I prefer to fail the RM if a 3rd (or more) resource > type is configured inside resource-types.xml. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-7929: -- Description: SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file cannot set the execution type of a container. This jira will introduce execution type in SLS to enable better simulation. This will help perf testing with regard to Opportunistic Containers. RUMEN has the default execution type GUARANTEED. SYNTH sets the execution type via the fields map_execution_type and reduce_execution_type. SLS sets the execution type via the field container.execution_type. For compatibility, GUARANTEED is used as the default value when the above fields are not set in the trace file. was: SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file cannot set the execution type of a container. This jira will introduce execution type in SLS to enable better simulation. RUMEN has the default execution type GUARANTEED. SYNTH sets the execution type via the fields map_execution_type and reduce_execution_type. SLS sets the execution type via the field container.execution_type. For compatibility, GUARANTEED is used as the default value when the above fields are not set in the trace file. > SLS supports setting container execution > > > Key: YARN-7929 > URL: https://issues.apache.org/jira/browse/YARN-7929 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Jiandan Yang >Assignee: Jiandan Yang >Priority: Major > Attachments: YARN-7929.001.patch > > > SLS currently supports three trace types, SYNTH, SLS and RUMEN, but the trace file > cannot set the execution type of a container. > This jira will introduce execution type in SLS to enable better simulation. > This will help perf testing with regard to the Opportunistic > Containers. > RUMEN has the default execution type GUARANTEED. > SYNTH sets the execution type via the fields map_execution_type and > reduce_execution_type. > SLS sets the execution type via the field container.execution_type. > For compatibility, GUARANTEED is used as the default value when the above > fields are not set in the trace file. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363389#comment-16363389 ] genericqa commented on YARN-7813: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} branch-3.0 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 3s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 4s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 20s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 52s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 16s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 59s{color} | {color:green} branch-3.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 14s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 17s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 814 unchanged - 1 fixed = 823 total (was 815) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 46s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 8s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 11s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color}
[jira] [Updated] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7813: - Attachment: YARN-7813.003.branch-2.patch > Capacity Scheduler Intra-queue Preemption should be configurable for each > queue > --- > > Key: YARN-7813 > URL: https://issues.apache.org/jira/browse/YARN-7813 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 2.9.0, 2.8.3, 3.0.0 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, > YARN-7813.002.patch, YARN-7813.003.branch-2.patch, > YARN-7813.003.branch-3.0.patch > > > Just as inter-queue (a.k.a. cross-queue) preemption is configurable per > queue, intra-queue (a.k.a. in-queue) preemption should be configurable per > queue. If a queue does not have a setting for intra-queue preemption, it > should inherit its parent's value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue
[ https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363419#comment-16363419 ] genericqa commented on YARN-7813: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 7m 41s{color} | {color:red} Docker failed to build yetus/hadoop:17213a0. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-7813 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910494/YARN-7813.003.branch-2.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19684/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Capacity Scheduler Intra-queue Preemption should be configurable for each > queue > --- > > Key: YARN-7813 > URL: https://issues.apache.org/jira/browse/YARN-7813 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler, scheduler preemption >Affects Versions: 2.9.0, 2.8.3, 3.0.0 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, > YARN-7813.002.patch, YARN-7813.003.branch-2.patch, > YARN-7813.003.branch-3.0.patch > > > Just as inter-queue (a.k.a. cross-queue) preemption is configurable per > queue, intra-queue (a.k.a. in-queue) preemption should be configurable per > queue. If a queue does not have a setting for intra-queue preemption, it > should inherit its parent's value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7929) SLS supports setting container execution
[ https://issues.apache.org/jira/browse/YARN-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363439#comment-16363439 ] genericqa commented on YARN-7929: -
| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-tools: The patch generated 22 new + 60 unchanged - 1 fixed = 82 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 51s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 39s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7929 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910492/YARN-7929.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f5256ccdea8f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 332269d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/19683/artifact/out/diff-checkst
[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores
[ https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363536#comment-16363536 ] lovekesh bansal commented on YARN-7328: --- [~templedf] [~leftnoteasy] I think some things have changed since the Jira was filed: # We currently throw that exception in checkMandatoryResources if it contains memory. By making this change we will be restricting it from being defined in both resource-types.xml and node-resources.xml. Is that the intent here? # Now we have some more resources in MANDATORY_RESOURCES, so should we not allow any of them to be defined in node-resources.xml? # ResourceUtils.getNodeResourceInformation(conf) is called by NodeManagerHardwareUtils.getNodeResources, which after that call checks if (memResInfo.getValue() == 0) \{ ret.setMemorySize(getContainerMemoryMB(conf)) }. The if check would then become redundant, and we would always set the memory by calling ret.setMemorySize(getContainerMemoryMB(conf)) (see the sketch after this message). Let me know your thoughts. > ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to > override yarn.nodemanager.resource.memory-mb and .cpu-vcores > -- > > Key: YARN-7328 > URL: https://issues.apache.org/jira/browse/YARN-7328 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Priority: Critical > > We will throw an exception if yarn.nodemanager.resource-types.memory is > configured, but not if .memory-mb or .vcores is configured. We should be > consistent. We should not allow resource types to redefine something for > which we already have a property to set. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
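To make point 3 concrete, here is a rough paraphrase of the NodeManagerHardwareUtils#getNodeResources flow being discussed; this is a sketch, not the exact trunk code, and getContainerMemoryMB is stubbed for illustration:
{code}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;
import org.apache.hadoop.yarn.util.resource.ResourceUtils;

public class NodeResourcesSketch {
  // Paraphrase of NodeManagerHardwareUtils#getNodeResources.
  static Resource getNodeResources(Configuration conf) {
    Resource ret = Resource.newInstance(0, 0);
    Map<String, ResourceInformation> nodeResources =
        ResourceUtils.getNodeResourceInformation(conf);
    for (Map.Entry<String, ResourceInformation> e : nodeResources.entrySet()) {
      ret.setResourceInformation(e.getKey(), e.getValue());
    }
    // If memory-mb can no longer be set via node-resources.xml, the value
    // here is always 0, this branch always fires, and the if check is
    // redundant -- the point raised in item 3 above.
    ResourceInformation mem =
        ret.getResourceInformation(ResourceInformation.MEMORY_URI);
    if (mem.getValue() == 0) {
      ret.setMemorySize(getContainerMemoryMB(conf));
    }
    return ret;
  }

  // Stand-in for the existing private helper of the same name.
  static long getContainerMemoryMB(Configuration conf) {
    return conf.getLong("yarn.nodemanager.resource.memory-mb", 8192);
  }
}
{code}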
[jira] [Updated] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores
[ https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lovekesh bansal updated YARN-7328: -- Attachment: YARN-7328_trunk.001.patch > ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to > override yarn.nodemanager.resource.memory-mb and .cpu-vcores > -- > > Key: YARN-7328 > URL: https://issues.apache.org/jira/browse/YARN-7328 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Priority: Critical > Attachments: YARN-7328_trunk.001.patch > > > We will throw an exception if yarn.nodemanager.resource-types.memory is > configured, but not if .memory-mb or .vcores is configured. We should be > consistent. We should not allow resource types to redefine something for > which we already have a property to set. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores
[ https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lovekesh bansal reassigned YARN-7328: - Assignee: lovekesh bansal > ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to > override yarn.nodemanager.resource.memory-mb and .cpu-vcores > -- > > Key: YARN-7328 > URL: https://issues.apache.org/jira/browse/YARN-7328 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Assignee: lovekesh bansal >Priority: Critical > Attachments: YARN-7328_trunk.001.patch > > > We will throw an exception if yarn.nodemanager.resource-types.memory is > configured, but not if .memory-mb or .vcores is configured. We should be > consistent. We should not allow resource types to redefine something for > which we already have a property to set. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores
[ https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363540#comment-16363540 ] lovekesh bansal commented on YARN-7328: --- I have attached the first patch, which just disables memory-mb and vcores. And since we throw the exception for all mandatory resources, we should call addMandatoryResources only after checkMandatoryResources in getNodeResourceInformation (see the ordering sketch after this message). > ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to > override yarn.nodemanager.resource.memory-mb and .cpu-vcores > -- > > Key: YARN-7328 > URL: https://issues.apache.org/jira/browse/YARN-7328 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Assignee: lovekesh bansal >Priority: Critical > Attachments: YARN-7328_trunk.001.patch > > > We will throw an exception if yarn.nodemanager.resource-types.memory is > configured, but not if .memory-mb or .vcores is configured. We should be > consistent. We should not allow resource types to redefine something for > which we already have a property to set. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
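The ordering change proposed above could look roughly like this; only the checkMandatoryResources and addMandatoryResources names come from ResourceUtils, the surrounding code is an illustrative stub:
{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

public class MandatoryResourceOrderingSketch {
  static Map<String, ResourceInformation> getNodeResourceInformation(
      Configuration conf) {
    Map<String, ResourceInformation> nodeResources = new HashMap<>();
    readNodeResourcesXml(conf, nodeResources); // user-supplied entries
    checkMandatoryResources(nodeResources);    // validate first ...
    addMandatoryResources(nodeResources);      // ... then add the defaults
    return nodeResources;
  }

  static void readNodeResourcesXml(Configuration conf,
      Map<String, ResourceInformation> m) {
    // Illustrative stub: parse node-resources.xml entries into m.
  }

  static void checkMandatoryResources(Map<String, ResourceInformation> m) {
    // Throws if a mandatory resource (memory-mb, vcores, ...) was redefined.
  }

  static void addMandatoryResources(Map<String, ResourceInformation> m) {
    m.putIfAbsent(ResourceInformation.MEMORY_MB.getName(),
        ResourceInformation.MEMORY_MB);
    m.putIfAbsent(ResourceInformation.VCORES.getName(),
        ResourceInformation.VCORES);
  }
}
{code}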
[jira] [Comment Edited] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores
[ https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363540#comment-16363540 ] lovekesh bansal edited comment on YARN-7328 at 2/14/18 6:16 AM: I have attached the first patch, which just disables memory-mb and vcores. And since we throw the exception for all mandatory resources, we should call addMandatoryResources only after checkMandatoryResources in getNodeResourceInformation. Please let me know your thoughts; I'll change the test cases accordingly. was (Author: lovekesh.bansal): I have attached the first patch, which just disables memory-mb and vcores. And since we throw the exception for all mandatory resources, we should call addMandatoryResources only after checkMandatoryResources in getNodeResourceInformation. > ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to > override yarn.nodemanager.resource.memory-mb and .cpu-vcores > -- > > Key: YARN-7328 > URL: https://issues.apache.org/jira/browse/YARN-7328 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Assignee: lovekesh bansal >Priority: Critical > Attachments: YARN-7328_trunk.001.patch > > > We will throw an exception if yarn.nodemanager.resource-types.memory is > configured, but not if .memory-mb or .vcores is configured. We should be > consistent. We should not allow resource types to redefine something for > which we already have a property to set. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7789) Should fail RM if 3rd resource type is configured but RM uses DefaultResourceCalculator
[ https://issues.apache.org/jira/browse/YARN-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363547#comment-16363547 ] Zian Chen commented on YARN-7789: - Hi [~sunilg], no problem, thank you so much for your response and for helping review the patch. [~leftnoteasy], do you think we can commit this? Thanks! > Should fail RM if 3rd resource type is configured but RM uses > DefaultResourceCalculator > --- > > Key: YARN-7789 > URL: https://issues.apache.org/jira/browse/YARN-7789 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sumana Sathish >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7789.001.patch, YARN-7789.002.patch > > > We may need to revisit this behavior: Currently, the RM doesn't fail if a 3rd > resource type is configured; allocated containers will be automatically > assigned the minimum allocation for all resource types except memory, which makes > it really hard to troubleshoot. I prefer to fail the RM if a 3rd or more resource > type is configured inside resource-types.xml. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
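The behavior requested in this issue amounts to a startup-time validation along these lines; this is a hypothetical sketch, not the actual YARN-7789 patch:
{code}
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ResourceInformation;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceUtils;

public class ResourceCalculatorCheckSketch {
  static void validate(ResourceCalculator rc) {
    Map<String, ResourceInformation> types = ResourceUtils.getResourceTypes();
    // memory-mb and vcores are the two built-in types; more than two means
    // a 3rd resource type was configured in resource-types.xml.
    if (types.size() > 2 && rc instanceof DefaultResourceCalculator) {
      throw new YarnRuntimeException("RM uses DefaultResourceCalculator, "
          + "which only handles memory, but " + types.size()
          + " resource types are configured. Use "
          + "DominantResourceCalculator instead.");
    }
  }
}
{code}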
[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier
[ https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363552#comment-16363552 ] Rohith Sharma K S commented on YARN-7919: - Thanks haibo for the patch! I verified the co-processor jar in a cluster and it works perfectly fine. # With the current patch, I see the jars below in the timelineservice folder. ## My view on this is that we should NOT keep {color:red}hadoop-yarn-server-timelineservice-hbase-server-3.2.0-SNAPSHOT-coprocessor.jar{color} separately; rather, we should keep the *co-processor* jar as {color:green}hadoop-yarn-server-timelineservice-hbase-server-3.2.0-SNAPSHOT.jar{color} itself, including the common dependency as well, since this jar is used only on the hbase server.
{code}
hadoop-yarn-server-timelineservice-3.2.0-SNAPSHOT.jar
hadoop-yarn-server-timelineservice-hbase-client-3.2.0-SNAPSHOT.jar
hadoop-yarn-server-timelineservice-hbase-common-3.2.0-SNAPSHOT.jar
hadoop-yarn-server-timelineservice-hbase-server-3.2.0-SNAPSHOT-coprocessor.jar
hadoop-yarn-server-timelineservice-hbase-server-3.2.0-SNAPSHOT.jar
lib
test
{code}
## Since the current patch generates the jar with the coprocessor name, I was thinking about the naming convention for the co-processor jar. Should we name it hadoop-yarn-server-timelineservice-hbase-coprocessor-3.2.0-SNAPSHOT.jar instead of hadoop-yarn-server-timelineservice-hbase-server-3.2.0-SNAPSHOT.jar? # nit: I see a src folder inside the hadoop-yarn-server-timelineservice-hbase package structure. Is this only for me, or do others see it as well?
{code}
hadoop-yarn-server-timelineservice-hbase-client
hadoop-yarn-server-timelineservice-hbase-common
hadoop-yarn-server-timelineservice-hbase-server
hadoop-yarn-server-timelineservice-hbase.iml
pom.xml
src
target
{code}
> Split timelineservice-hbase module to make YARN-7346 easier > --- > > Key: YARN-7919 > URL: https://issues.apache.org/jira/browse/YARN-7919 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineservice >Affects Versions: 3.0.0 >Reporter: Haibo Chen >Assignee: Haibo Chen >Priority: Major > Attachments: YARN-7919.00.patch, YARN-7919.01.patch, > YARN-7919.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363557#comment-16363557 ] Wangda Tan commented on YARN-7920: -- Thanks [~kkaranasos] for the review: {quote} * The naming external processor is a bit redundant and not very descriptive. Let's call it {{PlacementConstraintProcessor}}, since this is what it does.{quote} Updated, and renamed the handler to "placement-processor". {quote} * Similarly, in the comments of YarnConfiguration, "external which sits outside of the scheduler" is not very helpful about why this should be used. Let's say "Handle placement constraints by processor that is agnostic of the scheduler implementation".{quote} I just copied the contents from the markdown file; please let me know if that looks better. This should not matter, since this field is marked {{@Private}}. Users should get the source of truth from the official documentation. {quote} * Also, shall we call the {{NoneProcessor}} -> {{DefaultProcessor}} or something along these lines?{quote} I would prefer not to; "Default" is not meaningful. I would prefer to keep "none", since it means "no handler to process the SchedulingRequest". {quote} * At some places you use the term "placement requests". Maybe say scheduling requests?{quote} Done. I just updated the markdown doc (why is it using non-standard markdown? Is there any advantage to this format?) and made changes to the whole "Enabling placement constraints" section according to the code changes. Please review this change very carefully and let me know if it looks good. Since this is a blocker for 3.1.0, I would like to get this resolved by Friday. [~asuresh]/[~kkaranasos], could you help review it soon if possible? > Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch, YARN-7920.002.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, by default is none, and other acceptable > values are a. external-processor (since algorithm is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass SchedulingRequest to > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
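For reference, selecting the handler discussed above would look roughly like this; the property key and values follow this discussion, and the final constant names in YarnConfiguration may differ:
{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PlacementHandlerConfigExample {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // "placement-processor": scheduler-agnostic processor outside the
    // scheduler. "scheduler": hand SchedulingRequests to the scheduler
    // implementation. "none" (default): no handler processes
    // SchedulingRequests.
    conf.set("yarn.resourcemanager.placement-constraints.handler",
        "placement-processor");
    System.out.println(
        conf.get("yarn.resourcemanager.placement-constraints.handler"));
  }
}
{code}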
[jira] [Comment Edited] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363557#comment-16363557 ] Wangda Tan edited comment on YARN-7920 at 2/14/18 6:53 AM: --- Thanks [~kkaranasos] for the review: {quote} * The naming external processor is a bit redundant and not very descriptive. Let's call it {{PlacementConstraintProcessor}}, since this is what it does.{quote} Updated, and renamed the handler to "placement-processor". {quote} * Similarly, in the comments of YarnConfiguration, "external which sits outside of the scheduler" is not very helpful about why this should be used. Let's say "Handle placement constraints by processor that is agnostic of the scheduler implementation".{quote} I just copied the contents from the markdown file; please let me know if that looks better. This should not matter, since this field is marked {{@Private}}. Users should get the source of truth from the official documentation. {quote} * Also, shall we call the {{NoneProcessor}} -> {{DefaultProcessor}} or something along these lines?{quote} I would prefer not to; "Default" is not meaningful. I would prefer to keep "none", since it means "no handler to process the SchedulingRequest". {quote} * At some places you use the term "placement requests". Maybe say scheduling requests?{quote} Done. I just updated the markdown doc (why is it using non-standard markdown? Is there any advantage to this format?) and made changes to the whole "Enabling placement constraints" section according to the code changes. Please review this change very carefully and let me know if it looks good. Since this is a blocker for 3.1.0, I would like to get this resolved by Friday. [~asuresh]/[~kkaranasos], could you help review it soon if possible? Attached ver.3 patch. was (Author: leftnoteasy): Thanks [~kkaranasos] for the review: {quote} * The naming external processor is a bit redundant and not very descriptive. Let's call it {{PlacementConstraintProcessor}}, since this is what it does.{quote} Updated, and renamed the handler to "placement-processor". {quote} * Similarly, in the comments of YarnConfiguration, "external which sits outside of the scheduler" is not very helpful about why this should be used. Let's say "Handle placement constraints by processor that is agnostic of the scheduler implementation".{quote} I just copied the contents from the markdown file; please let me know if that looks better. This should not matter, since this field is marked {{@Private}}. Users should get the source of truth from the official documentation. {quote} * Also, shall we call the {{NoneProcessor}} -> {{DefaultProcessor}} or something along these lines?{quote} I would prefer not to; "Default" is not meaningful. I would prefer to keep "none", since it means "no handler to process the SchedulingRequest". {quote} * At some places you use the term "placement requests". Maybe say scheduling requests?{quote} Done. I just updated the markdown doc (why is it using non-standard markdown? Is there any advantage to this format?) and made changes to the whole "Enabling placement constraints" section according to the code changes. Please review this change very carefully and let me know if it looks good. Since this is a blocker for 3.1.0, I would like to get this resolved by Friday. [~asuresh]/[~kkaranasos], could you help review it soon if possible?
> Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch, YARN-7920.002.patch, > YARN-7920.003.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, by default is none, and other acceptable > values are a. external-processor (since algorithm is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass SchedulingRequest to > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7920: - Attachment: YARN-7920.003.patch > Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch, YARN-7920.002.patch, > YARN-7920.003.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, by default is none, and other acceptable > values are a. external-processor (since algorithm is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass SchedulingRequest to > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior
[ https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363561#comment-16363561 ] Wangda Tan commented on YARN-7292: -- [~templedf], since this is a blocker for 3.1.0, we want to get this done by Friday; could you help give some feedback? Will commit in 2 days if we don't hear from you :). > Revisit Resource Profile Behavior > - > > Key: YARN-7292 > URL: https://issues.apache.org/jira/browse/YARN-7292 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7292.002.patch, YARN-7292.003.patch, > YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, > YARN-7292.wip.001.patch > > > Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There're a > couple of resource profile related behaviors that might need to be updated: > 1) Configure resource profile in server side or client side: > Currently resource profile can be only configured centrally: > - Advantages: > A given resource profile has the same meaning in the cluster. It won’t > change when we run different apps in different configurations. A job that can run > under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side benefit > is that the YARN scheduler can potentially do better bin packing. > - Disadvantages: > Hard for applications to add their own resource profiles. > 2) Do we really need mandatory resource profiles such as > minimum/maximum/default? > 3) Should we send resource profile name inside ResourceRequest, or should > client/AM translate it to resource and set it to the existing resource > fields? > 4) Related to above, should we allow resource overrides or client/AM should > send final resource to RM? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7789) Should fail RM if 3rd resource type is configured but RM uses DefaultResourceCalculator
[ https://issues.apache.org/jira/browse/YARN-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363566#comment-16363566 ] Wangda Tan commented on YARN-7789: -- +1, thanks [~Zian Chen], will commit later today. > Should fail RM if 3rd resource type is configured but RM uses > DefaultResourceCalculator > --- > > Key: YARN-7789 > URL: https://issues.apache.org/jira/browse/YARN-7789 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sumana Sathish >Assignee: Zian Chen >Priority: Critical > Attachments: YARN-7789.001.patch, YARN-7789.002.patch > > > We may need to revisit this behavior: Currently, the RM doesn't fail if a 3rd > resource type is configured; allocated containers will be automatically > assigned the minimum allocation for all resource types except memory, which makes > it really hard to troubleshoot. I prefer to fail the RM if a 3rd or more resource > type is configured inside resource-types.xml. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores
[ https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363577#comment-16363577 ] Wangda Tan commented on YARN-7328: -- Thanks [~lovekesh.bansal], the uploaded change looks good to me. Could you update test cases accordingly? > ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to > override yarn.nodemanager.resource.memory-mb and .cpu-vcores > -- > > Key: YARN-7328 > URL: https://issues.apache.org/jira/browse/YARN-7328 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Assignee: lovekesh bansal >Priority: Critical > Attachments: YARN-7328_trunk.001.patch > > > We will throw an exception if yarn.nodemanager.resource-types.memory is > configured, but not if .memory-mb or .vcores is configured. We should be > consistent. We should not allow resource types to redefine something for > which we already have a property to set. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7930) Add configuration to initialize RM with configured labels.
Abhishek Modi created YARN-7930: --- Summary: Add configuration to initialize RM with configured labels. Key: YARN-7930 URL: https://issues.apache.org/jira/browse/YARN-7930 Project: Hadoop YARN Issue Type: Sub-task Reporter: Abhishek Modi Assignee: Abhishek Modi At present, the only way to create labels is using the admin API. Sometimes, there is a requirement to start the cluster with pre-configured node labels. This Jira introduces YARN configurations to start the RM with predefined node labels. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
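A minimal sketch of what the proposed configuration could look like; the property key below is hypothetical, and the actual key is defined by the patch:
{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PreconfiguredLabelsExample {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // Hypothetical key, for illustration only. On startup the RM would
    // parse this list and add the labels to its node-label store, instead
    // of requiring `yarn rmadmin -addToClusterNodeLabels` afterwards.
    conf.set("yarn.node-labels.configured-labels", "gpu,ssd,large-mem");
    System.out.println(conf.get("yarn.node-labels.configured-labels"));
  }
}
{code}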
[jira] [Updated] (YARN-7930) Add configuration to initialize RM with configured labels.
[ https://issues.apache.org/jira/browse/YARN-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Modi updated YARN-7930: Attachment: YARN-7930.001.patch > Add configuration to initialize RM with configured labels. > -- > > Key: YARN-7930 > URL: https://issues.apache.org/jira/browse/YARN-7930 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Abhishek Modi >Assignee: Abhishek Modi >Priority: Major > Attachments: YARN-7930.001.patch > > > At present, the only way to create labels is using the admin API. Sometimes, > there is a requirement to start the cluster with pre-configured node labels. > This Jira introduces YARN configurations to start the RM with predefined node > labels. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7930) Add configuration to initialize RM with configured labels.
[ https://issues.apache.org/jira/browse/YARN-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363580#comment-16363580 ] genericqa commented on YARN-7930: -
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} YARN-7930 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\ \\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7930 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910527/YARN-7930.001.patch |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19686/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated. > Add configuration to initialize RM with configured labels. > -- > > Key: YARN-7930 > URL: https://issues.apache.org/jira/browse/YARN-7930 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Abhishek Modi >Assignee: Abhishek Modi >Priority: Major > Attachments: YARN-7930.001.patch > > > At present, the only way to create labels is using the admin API. Sometimes, > there is a requirement to start the cluster with pre-configured node labels. > This Jira introduces YARN configurations to start the RM with predefined node > labels. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints
[ https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363595#comment-16363595 ] Arun Suresh commented on YARN-7920: --- Thanks for working on this, [~leftnoteasy]. A couple of comments: * the {{amsProcessingChain.init(rmContext, null);}} call should be in the {{initializeProcessingChain}} method (see the skeleton after this message). * With regard to the {{SchedulerPlacementProcessor}}, we are assuming that if it is enabled, then placement constraints CANNOT be specified via the registerAM call. Technically, you can still specify constraints in the register call - the SchedulingRequest just overrides them. * I agree with [~kkaranasos]: we should make the value of the handler something the user can make sense of. Given that the major difference between the two approaches is that the scheduler still handles requests in priority order while the processor tries to optimize for placement, ignoring priority, maybe we should call them "priority-optimized" and "placement-optimized"? Thoughts? > Cleanup configuration of PlacementConstraints > - > > Key: YARN-7920 > URL: https://issues.apache.org/jira/browse/YARN-7920 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7920.001.patch, YARN-7920.002.patch, > YARN-7920.003.patch > > > Currently it is very confusing to have the two configs in two different files > (yarn-site.xml and capacity-scheduler.xml). > > Maybe a better approach is: we can delete the scheduling-request.allowed in > CS, and update placement-constraints configs in yarn-site.xml a bit: > > - Remove placement-constraints.enabled, and add a new > placement-constraints.handler, by default is none, and other acceptable > values are a. external-processor (since algorithm is too generic to me), b. > scheduler. > - And add a new PlacementProcessor just to pass SchedulingRequest to > scheduler without any modifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
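The first bullet amounts to a small refactor along these lines; this is a hypothetical skeleton in which only the amsProcessingChain and initializeProcessingChain names come from the patch under review:
{code}
// Illustrative skeleton only; the surrounding types are stand-ins.
class ApplicationMasterServiceSketch {
  private final ProcessingChain amsProcessingChain = new ProcessingChain();

  void serviceInit(Object rmContext) {
    // No separate amsProcessingChain.init(...) call here anymore.
    initializeProcessingChain(rmContext);
  }

  private void initializeProcessingChain(Object rmContext) {
    amsProcessingChain.init(rmContext, null); // moved into this method
    // ... then add the configured placement-constraint processor(s) ...
  }

  static class ProcessingChain {
    void init(Object rmContext, Object next) {
      // Illustrative stub.
    }
  }
}
{code}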