[jira] [Updated] (YARN-5728) TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization timeout
[ https://issues.apache.org/jira/browse/YARN-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated YARN-5728: Summary: TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization timeout (was: TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization timeout) > TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization timeout > > > Key: YARN-5728 > URL: https://issues.apache.org/jira/browse/YARN-5728 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: YARN-5728.01.patch > > > TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization is failing by > timeout. > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/192/testReport/junit/org.apache.hadoop.yarn.server/TestMiniYarnClusterNodeUtilization/testUpdateNodeUtilization/ > {noformat} > java.lang.Exception: test timed out after 6 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) > at com.sun.proxy.$Proxy85.nodeHeartbeat(Unknown Source) > at > org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization(TestMiniYarnClusterNodeUtilization.java:113) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
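The stack trace above shows the test thread parked in Thread.sleep() inside RetryInvocationHandler when the JUnit timeout fires. As a schematic illustration only (plain stdlib Java, not the Hadoop class; all names are hypothetical), a retry loop that sleeps between attempts looks roughly like this, which is why a too-small test timeout interrupts the sleep rather than the RPC itself:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Schematic model of a retry-with-sleep loop like RetryInvocationHandler:
// the caller's thread spends most of its wall time in Thread.sleep(),
// so a short test timeout fires while the thread is parked there.
public class RetryWithDeadline {
    static <T> T invokeWithRetries(Supplier<T> call, int maxRetries,
                                   long sleepMillis) throws InterruptedException {
        for (int attempt = 0; ; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                if (attempt >= maxRetries) {
                    throw e;
                }
                // This sleep is where the stack trace above spends its time.
                TimeUnit.MILLISECONDS.sleep(sleepMillis);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] calls = {0};
        // Fails twice, then succeeds; total wall time >= 2 * sleepMillis.
        String result = invokeWithRetries(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("heartbeat not ready");
            }
            return "heartbeat-ok";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " calls");
    }
}
```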
[jira] [Created] (YARN-5737) [Atsv2] Add validation for entity fields that are published.
Rohith Sharma K S created YARN-5737: --- Summary: [Atsv2] Add validation for entity fields that are published. Key: YARN-5737 URL: https://issues.apache.org/jira/browse/YARN-5737 Project: Hadoop YARN Issue Type: Bug Reporter: Rohith Sharma K S The class TestSystemMetricsPublisherForV2 has test cases for publishing entities and validates the entity's created time and number of events. It would be better to have validation for all the fields that are published under entity-infos, event-infos, metrics, etc.
[jira] [Created] (YARN-5738) Allow services to release/kill specific containers
Siddharth Seth created YARN-5738: Summary: Allow services to release/kill specific containers Key: YARN-5738 URL: https://issues.apache.org/jira/browse/YARN-5738 Project: Hadoop YARN Issue Type: Sub-task Reporter: Siddharth Seth There are occasions on which specific containers may not be required by a service. It would be useful to have support for returning these to YARN. Slider flex doesn't give this control. cc [~gsaha], [~vinodkv]
[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574295#comment-15574295 ] Jian He commented on YARN-5735: --- Updated the patch to fix the issue. > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5735-yarn-native-services.001.patch, > YARN-5735-yarn-native-services.002.patch, YARN-5735.1.patch > >
[jira] [Updated] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5735: -- Attachment: YARN-5735-yarn-native-services.002.patch > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5735-yarn-native-services.001.patch, > YARN-5735-yarn-native-services.002.patch, YARN-5735.1.patch > >
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574188#comment-15574188 ] Hadoop QA commented on YARN-2009: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 70 new + 178 unchanged - 30 fixed = 248 total (was 208) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 5s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 41s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 53m 26s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector$TAPriorityComparator implements Comparator but not Serializable At IntraQueueCandidatesSelector.java:Serializable At IntraQueueCandidatesSelector.java:[lines 43-53] | | | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector$TAReverseComparator implements Comparator but not Serializable At IntraQueueCandidatesSelector.java:Serializable At IntraQueueCandidatesSelector.java:[lines 57-67] | | Failed junit tests | hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueue | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833292/YARN-2009.0007.patch | | JIRA Issue | YARN-2009 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e5cbaa1382e2 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | |
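The two FindBugs warnings above (SE_COMPARATOR_SHOULD_BE_SERIALIZABLE) flag comparators that are stored inside serializable collections without themselves being Serializable. As a generic hedged sketch (the Task class and field names below are hypothetical stand-ins, not the actual TAPriorityComparator code), the usual fix FindBugs suggests is to implement both interfaces:

```java
import java.io.Serializable;
import java.util.Comparator;
import java.util.PriorityQueue;

// Generic illustration of the FindBugs SE_COMPARATOR_SHOULD_BE_SERIALIZABLE
// finding: a Comparator held by a serializable collection (TreeMap,
// PriorityQueue, ...) should itself implement Serializable, otherwise
// serializing the collection fails.
public class ComparatorSerializable {
    // Hypothetical stand-in for a task attempt with a priority field.
    static class Task {
        final int priority;
        Task(int priority) { this.priority = priority; }
    }

    // The fix: implement both Comparator and Serializable.
    static class PriorityComparator implements Comparator<Task>, Serializable {
        private static final long serialVersionUID = 1L;
        @Override
        public int compare(Task a, Task b) {
            return Integer.compare(a.priority, b.priority);
        }
    }

    public static void main(String[] args) {
        PriorityQueue<Task> queue = new PriorityQueue<>(new PriorityComparator());
        queue.add(new Task(3));
        queue.add(new Task(1));
        queue.add(new Task(2));
        // Lowest priority value comes out first with this comparator.
        System.out.println("head priority = " + queue.poll().priority);
    }
}
```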
[jira] [Commented] (YARN-5717) Add tests for container-executor's is_feature_enabled function
[ https://issues.apache.org/jira/browse/YARN-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574174#comment-15574174 ] Hudson commented on YARN-5717: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10608 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10608/]) YARN-5717. Add tests for container-executor is_feature_enabled. (cdouglas: rev cf3f43e95bf46030875137fc36da5c1fbe14250d) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c > Add tests for container-executor's is_feature_enabled function > -- > > Key: YARN-5717 > URL: https://issues.apache.org/jira/browse/YARN-5717 > Project: Hadoop YARN > Issue Type: Task > Components: yarn >Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: YARN-5717.001.patch, YARN-5717.002.patch > > > YARN-5704 added functionality to disable certain features in > container-executor. Most of the changes cannot be tested via > container-executor - however, is_feature_enabled could be tested if it were > made public. (It is currently static.)
[jira] [Updated] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-2009: -- Attachment: YARN-2009.0007.patch Yes [~eepayne]. That's perfectly correct. Updated a new patch where we make a clone of Resources.min(..). > Priority support for preemption in ProportionalCapacityPreemptionPolicy > --- > > Key: YARN-2009 > URL: https://issues.apache.org/jira/browse/YARN-2009 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Devaraj K >Assignee: Sunil G > Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, > YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, > YARN-2009.0006.patch, YARN-2009.0007.patch > > > While preempting containers based on the queue ideal assignment, we may need > to consider preempting the low priority application containers first.
[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573901#comment-15573901 ] Gour Saha commented on YARN-5735: - My bad. I thought it was ms too. Seems like the monitor interval is in ms. Thanks [~rohithsharma]. > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5735-yarn-native-services.001.patch, > YARN-5735.1.patch > >
[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573887#comment-15573887 ] Jian He commented on YARN-5735: --- Oh, thanks for checking. I thought it was milliseconds; will change it. > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5735-yarn-native-services.001.patch, > YARN-5735.1.patch > >
[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573877#comment-15573877 ] Rohith Sharma K S commented on YARN-5735: - I just had a quick walk through the patch, and I see that in {{appTimeout.put(ApplicationTimeoutType.LIFETIME, lifetime * 1000);}} the lifetime value is multiplied by 1000. Is that intentional? Note that the lifetime unit is seconds. > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5735-yarn-native-services.001.patch, > YARN-5735.1.patch > >
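The thread above is a classic unit-confusion question: a bare `lifetime * 1000` silently assumes the consumer wants milliseconds. A hedged sketch of one way to keep such conversions self-documenting (the enum and method names below are hypothetical, not the YARN-4205 API) is to route them through java.util.concurrent.TimeUnit:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Schematic illustration of the unit question discussed above: making the
// source and target units explicit with TimeUnit instead of a magic * 1000.
// TimeoutType and buildTimeouts are illustrative names only.
public class LifetimeUnits {
    enum TimeoutType { LIFETIME }

    static Map<TimeoutType, Long> buildTimeouts(long lifetimeSeconds,
                                                TimeUnit targetUnit) {
        Map<TimeoutType, Long> timeouts = new HashMap<>();
        // Explicit conversion: convert(duration, sourceUnit) on the target unit.
        timeouts.put(TimeoutType.LIFETIME,
                     targetUnit.convert(lifetimeSeconds, TimeUnit.SECONDS));
        return timeouts;
    }

    public static void main(String[] args) {
        long lifetime = 120; // seconds
        System.out.println("as seconds: "
            + buildTimeouts(lifetime, TimeUnit.SECONDS).get(TimeoutType.LIFETIME));
        System.out.println("as millis: "
            + buildTimeouts(lifetime, TimeUnit.MILLISECONDS).get(TimeoutType.LIFETIME));
    }
}
```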
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573867#comment-15573867 ] Rohith Sharma K S commented on YARN-5699: - bq. the name DIAGNOSTICS_INFO_INFO doesn't sound quite right; just DIAGNOSTICS_INFO? Makes sense to me; I did not notice this. I will update it. bq. I'm a little confused by this test; it appears that we're preparing these entities as before (i.e. all this info is at the event level). Is that intended? Don't we want to reflect the same changes in this test too? I wonder how this test is passing then (or what it's testing even)? Frankly, I have not made any modifications in these test classes *explicitly*. I did *MetricsConstants class refactoring, which resulted in changes in some of the test classes. I can look at these test classes, and if only minor modifications are needed I can handle them in this patch itself; otherwise we might need to revisit the whole test class again, which could be done in a separate JIRA. I also notice that there are no specific test cases in TestSystemMetricsPublisherForV2 validating the individual entity fields that are published to ATSv2. Again, it is a bunch of test cases to be added to validate each field. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, > 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch > > > Currently, all the container information is published in 2 places. Some of > it is at the entity info (top-level) and some is at the event info. > For containers, some of the event info should be published at the container info > level. For example: container exit status, container state, created time and > finished time. These are general pieces of information about a container required for > the container report, so it is better to publish them in the top-level info field.
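The change discussed above amounts to moving report-relevant fields from per-event info maps up to the entity's top-level info map. A schematic sketch of that promotion (plain maps, not the real TimelineEntity/TimelineEvent API; field names are illustrative):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Schematic model of the refactoring discussed above: fields needed for a
// container report (exit status, state, created/finished time) get promoted
// from event-level info to the entity's top-level info map, where report
// readers can find them without scanning events. Names are illustrative.
public class EntityInfoPromotion {
    static final String[] REPORT_FIELDS =
        {"EXIT_STATUS", "STATE", "CREATED_TIME", "FINISHED_TIME"};

    static Map<String, Object> promote(Map<String, Object> eventInfo,
                                       Map<String, Object> entityInfo) {
        for (String field : REPORT_FIELDS) {
            if (eventInfo.containsKey(field)) {
                // Move the general field up to the top-level entity info.
                entityInfo.put(field, eventInfo.remove(field));
            }
        }
        return entityInfo;
    }

    public static void main(String[] args) {
        Map<String, Object> eventInfo = new LinkedHashMap<>();
        eventInfo.put("EXIT_STATUS", 0);
        eventInfo.put("DIAGNOSTICS_INFO", "ok"); // event-specific, stays put
        Map<String, Object> entityInfo = promote(eventInfo, new HashMap<>());
        System.out.println("entity: " + entityInfo.keySet()
            + " event: " + eventInfo.keySet());
    }
}
```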
[jira] [Comment Edited] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior
[ https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573792#comment-15573792 ] Xuan Gong edited comment on YARN-5718 at 10/14/16 1:38 AM: --- +1 LGTM. [~djp] But I am worried about the back-incompatibility issue. Could you file a separate ticket to have a discussion on branch-2 and branch-2.8 for this ticket, please? In the mean time, I will commit the patch into trunk. was (Author: xgong): +1 LGTM. [~djp] But I am worried about the back-incompatibility issue. Could you file a separate ticket to have a discussion on branch-2 and branch-2.8 for this ticket, please? At the mean time, I will commit the patch into trunk. > TimelineClient (and other places in YARN) shouldn't over-write HDFS client > retry settings which could cause unexpected behavior > --- > > Key: YARN-5718 > URL: https://issues.apache.org/jira/browse/YARN-5718 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager, timelineclient >Reporter: Junping Du >Assignee: Junping Du > Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch > > > In one HA cluster, after the NN failed over, we noticed that jobs were getting > failed because the TimelineClient failed to retry the connection to the proper NN. This is > because we overwrite hdfs client settings, hard-coding the retry policy to > be enabled, which conflicts with the NN failover case - the hdfs client should fail fast > so it can retry on another NN. > We shouldn't assume any retry policy for the hdfs client at any place in YARN. > This should be kept consistent with the HDFS settings, which have different retry > policies in different deployment cases. Thus, we should clean up these hard-coded > settings in YARN, including: FileSystemTimelineWriter, > FileSystemRMStateStore and FileSystemNodeLabelsStore.
[jira] [Comment Edited] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior
[ https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573792#comment-15573792 ] Xuan Gong edited comment on YARN-5718 at 10/14/16 1:39 AM: --- +1 LGTM. [~djp] But I am worried about the back-incompatibility issue. Could you file a separate ticket to have a discussion for branch-2 and branch-2.8, please? In the mean time, I will commit the patch into trunk. was (Author: xgong): +1 LGTM. [~djp] But I am worried about the back-incompatibility issue. Could you file a separate ticket to have a discussion on branch-2 and branch-2.8 for this ticket, please? In the mean time, I will commit the patch into trunk. > TimelineClient (and other places in YARN) shouldn't over-write HDFS client > retry settings which could cause unexpected behavior > --- > > Key: YARN-5718 > URL: https://issues.apache.org/jira/browse/YARN-5718 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager, timelineclient >Reporter: Junping Du >Assignee: Junping Du > Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch > > > In one HA cluster, after the NN failed over, we noticed that jobs were getting > failed because the TimelineClient failed to retry the connection to the proper NN. This is > because we overwrite hdfs client settings, hard-coding the retry policy to > be enabled, which conflicts with the NN failover case - the hdfs client should fail fast > so it can retry on another NN. > We shouldn't assume any retry policy for the hdfs client at any place in YARN. > This should be kept consistent with the HDFS settings, which have different retry > policies in different deployment cases. Thus, we should clean up these hard-coded > settings in YARN, including: FileSystemTimelineWriter, > FileSystemRMStateStore and FileSystemNodeLabelsStore.
[jira] [Updated] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior
[ https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-5718: Target Version/s: 3.0.0-alpha2 (was: 2.8.0, 3.0.0-alpha2) > TimelineClient (and other places in YARN) shouldn't over-write HDFS client > retry settings which could cause unexpected behavior > --- > > Key: YARN-5718 > URL: https://issues.apache.org/jira/browse/YARN-5718 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager, timelineclient >Reporter: Junping Du >Assignee: Junping Du > Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch > > > In one HA cluster, after the NN failed over, we noticed that jobs were getting > failed because the TimelineClient failed to retry the connection to the proper NN. This is > because we overwrite hdfs client settings, hard-coding the retry policy to > be enabled, which conflicts with the NN failover case - the hdfs client should fail fast > so it can retry on another NN. > We shouldn't assume any retry policy for the hdfs client at any place in YARN. > This should be kept consistent with the HDFS settings, which have different retry > policies in different deployment cases. Thus, we should clean up these hard-coded > settings in YARN, including: FileSystemTimelineWriter, > FileSystemRMStateStore and FileSystemNodeLabelsStore.
[jira] [Commented] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior
[ https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573792#comment-15573792 ] Xuan Gong commented on YARN-5718: - +1 LGTM. [~djp] But I am worried about the back-incompatibility issue. Could you file a separate ticket to have a discussion on branch-2 and branch-2.8 for this ticket, please? At the mean time, I will commit the patch into trunk. > TimelineClient (and other places in YARN) shouldn't over-write HDFS client > retry settings which could cause unexpected behavior > --- > > Key: YARN-5718 > URL: https://issues.apache.org/jira/browse/YARN-5718 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager, timelineclient >Reporter: Junping Du >Assignee: Junping Du > Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch > > > In one HA cluster, after the NN failed over, we noticed that jobs were getting > failed because the TimelineClient failed to retry the connection to the proper NN. This is > because we overwrite hdfs client settings, hard-coding the retry policy to > be enabled, which conflicts with the NN failover case - the hdfs client should fail fast > so it can retry on another NN. > We shouldn't assume any retry policy for the hdfs client at any place in YARN. > This should be kept consistent with the HDFS settings, which have different retry > policies in different deployment cases. Thus, we should clean up these hard-coded > settings in YARN, including: FileSystemTimelineWriter, > FileSystemRMStateStore and FileSystemNodeLabelsStore.
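The cleanup direction in YARN-5718 is essentially "don't clobber the client's configured policy". As a schematic sketch only (plain java.util.Properties, not the Hadoop Configuration class; the key name is illustrative), the difference between hard-coding a retry setting and supplying it only as a default looks like this:

```java
import java.util.Properties;

// Schematic illustration of the YARN-5718 cleanup: instead of
// unconditionally hard-coding a client retry setting, framework code should
// leave the key alone (or at most provide a default) so the deployment's
// own HDFS client policy applies. The key name here is illustrative only.
public class RespectClientSettings {
    static final String RETRY_KEY = "client.retry.policy.enabled";

    // Problematic pattern: clobbers whatever the deployment configured.
    static void hardCode(Properties conf) {
        conf.setProperty(RETRY_KEY, "true");
    }

    // Cleaned-up pattern: only supply a value when nothing is set.
    static void setIfUnset(Properties conf, String value) {
        conf.putIfAbsent(RETRY_KEY, value);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(RETRY_KEY, "false"); // HA deployment: fail fast
        hardCode(conf);
        System.out.println("hard-coded: " + conf.getProperty(RETRY_KEY));

        conf.setProperty(RETRY_KEY, "false");
        setIfUnset(conf, "true");
        System.out.println("respected: " + conf.getProperty(RETRY_KEY));
    }
}
```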
[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573754#comment-15573754 ] Hadoop QA commented on YARN-5735: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s {color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s {color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s {color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 55s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 317 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api in yarn-native-services has 14 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s {color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 4 new + 443 unchanged - 1 fixed = 447 total (was 444) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-services-api in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s {color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 19s {color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s {color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s {color} | {color:red} The patch generated 10 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 33s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | slider.core.registry.docstore.TestPublishedConfigurationOutputter | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833260/YARN-5735-yarn-native-services.001.patch | | JIRA Issue | YARN-5735 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit
[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation
[ https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573711#comment-15573711 ] Hadoop QA commented on YARN-5325: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s {color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 9 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 57s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | 
{color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 11s {color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed = 74 total (was 75) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 28s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833257/YARN-5325-YARN-2915.14.patch | | JIRA Issue | YARN-5325 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle shellcheck shelldocs | | uname | Linux f8f5ab23e989 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 0bf6bbb | | Default Java | 1.8.0_101 | | shellcheck | v0.4.4 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13388/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13388/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Stateless ARMRMProxy policies implementation > > > Key: YARN-5325 >
[jira] [Commented] (YARN-5690) Integrate native services modules into maven build
[ https://issues.apache.org/jira/browse/YARN-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573707#comment-15573707 ] Hadoop QA commented on YARN-5690: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 9s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s {color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 24s {color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 3s {color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 55s {color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 3m 17s {color} | {color:red} hadoop-yarn in yarn-native-services failed. 
{color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 22s {color} | {color:green} yarn-native-services passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 317 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 58s {color} | {color:red} hadoop-yarn in yarn-native-services failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 43s {color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 44s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 44s {color} | {color:red} root: The patch generated 1 new + 581 unchanged - 1 fixed = 582 total (was 582) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 3m 14s {color} | {color:red} hadoop-yarn in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 12s {color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 42s {color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s {color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s {color} | {color:green} hadoop-project in the patch passed. {color} | |
[jira] [Created] (YARN-5736) YARN container executor config does not handle white space
Miklos Szegedi created YARN-5736: Summary: YARN container executor config does not handle white space Key: YARN-5736 URL: https://issues.apache.org/jira/browse/YARN-5736 Project: Hadoop YARN Issue Type: Bug Reporter: Miklos Szegedi Assignee: Miklos Szegedi Priority: Trivial The container executor configuration reader does not handle white space or malformed key-value pairs in the config file correctly or gracefully. As an example, take the following key-value line from the configuration (note: << is used as a marker to show the extra trailing space): yarn.nodemanager.linux-container-executor.group=yarn << This is a valid line, but when you run the check over the file: [root@test]# ./container-executor --checksetup Can't get group information for yarn - Success. [root@test]# it appears to fail to find the yarn group, but it is really trying to find the "yarn " group (with the trailing space), which fails. There is no trimming anywhere while processing the lines, so a space before or after the = sign would also cause a failure. A minor nit is that the failure is still logged as a Success. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
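The missing trimming described above can be sketched as follows. This is an illustrative Java sketch of tolerant key=value parsing, not the actual container-executor reader (which is native C code); the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: shows trimming around both key and value, so a
// trailing space after "yarn" no longer produces a bogus group name, and
// malformed lines are skipped instead of causing confusing failures.
class ConfigLineParser {
    static Map<String, String> parse(List<String> lines) {
        Map<String, String> conf = new HashMap<>();
        for (String line : lines) {
            String trimmed = line.trim();
            // skip blank lines and comments instead of failing on them
            if (trimmed.isEmpty() || trimmed.startsWith("#")) {
                continue;
            }
            int eq = trimmed.indexOf('=');
            // reject malformed lines: no '=' at all, or an empty key
            if (eq <= 0) {
                continue;
            }
            // trim around both key and value, so "group=yarn " yields "yarn"
            conf.put(trimmed.substring(0, eq).trim(),
                     trimmed.substring(eq + 1).trim());
        }
        return conf;
    }
}
```

With this kind of parsing, spaces before or after the = sign and trailing spaces after the value are all tolerated.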
[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery
[ https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573687#comment-15573687 ] Sangjin Lee commented on YARN-5638: --- The latest patch LGTM. I'll wait until tomorrow morning PDT to give folks a chance to comment on the latest patch before I commit. > Introduce a collector timestamp to uniquely identify collectors creation > order in collector discovery > - > > Key: YARN-5638 > URL: https://issues.apache.org/jira/browse/YARN-5638 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Li Lu > Attachments: YARN-5638-YARN-5355.v4.patch, > YARN-5638-YARN-5355.v5.patch, YARN-5638-trunk.v1.patch, > YARN-5638-trunk.v2.patch, YARN-5638-trunk.v3.patch > > > As discussed in YARN-3359, we need to further identify timeline collectors' > creation order to rebuild collector discovery data in the RM. This JIRA > proposes to use a timestamp to order collectors > for each application in the RM. This timestamp can then be used when a > standby RM becomes active to rebuild collector discovery data. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
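The timestamp-based ordering proposed above can be sketched roughly as below. This is a hypothetical illustration, not the actual YARN-5638 patch; the class and method names are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of timestamp-based collector discovery: a registration
// only replaces an existing entry if its creation timestamp is newer, so a
// standby RM rebuilding its state can replay registrations in any order.
class CollectorRegistry {
    static final class Entry {
        final String address;
        final long creationTimestamp;
        Entry(String address, long creationTimestamp) {
            this.address = address;
            this.creationTimestamp = creationTimestamp;
        }
    }

    private final Map<String, Entry> collectors = new ConcurrentHashMap<>();

    /** Returns true if the registration was accepted (first seen, or newer). */
    boolean register(String appId, String address, long timestamp) {
        Entry winner = collectors.merge(appId, new Entry(address, timestamp),
            (old, fresh) -> fresh.creationTimestamp > old.creationTimestamp ? fresh : old);
        return winner.address.equals(address) && winner.creationTimestamp == timestamp;
    }

    String getAddress(String appId) {
        Entry e = collectors.get(appId);
        return e == null ? null : e.address;
    }
}
```

The key property is that a stale registration (older timestamp) arriving late can never overwrite a newer one, which is exactly what matters during an RM failover.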
[jira] [Updated] (YARN-5325) Stateless ARMRMProxy policies implementation
[ https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-5325: - Attachment: YARN-5325-YARN-2915.14.patch Missed one checkstyle issue, so I updated the patch to fix it. > Stateless ARMRMProxy policies implementation > > > Key: YARN-5325 > URL: https://issues.apache.org/jira/browse/YARN-5325 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5325-YARN-2915.05.patch, > YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, > YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, > YARN-5325-YARN-2915.10.patch, YARN-5325-YARN-2915.11.patch, > YARN-5325-YARN-2915.12.patch, YARN-5325-YARN-2915.13.patch, > YARN-5325-YARN-2915.14.patch, YARN-5325.01.patch, YARN-5325.02.patch, > YARN-5325.03.patch, YARN-5325.04.patch > > > This JIRA tracks policies in the AMRMProxy that decide how to forward > ResourceRequests, without maintaining substantial state across decisions > (e.g., broadcast). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management
[ https://issues.apache.org/jira/browse/YARN-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573674#comment-15573674 ] Min Shen commented on YARN-5734: [~curino], [~subru], As discussed offline, could you please provide feedback on the design docs we currently have? > OrgQueue for easy CapacityScheduler queue configuration management > -- > > Key: YARN-5734 > URL: https://issues.apache.org/jira/browse/YARN-5734 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Min Shen >Assignee: Min Shen > Attachments: OrgQueue_Design_v0.pdf > > > The current XML-based configuration mechanism in CapacityScheduler makes it > very inconvenient to apply any changes to the queue configurations. We saw two > main drawbacks in the file-based configuration mechanism: > # It makes it very inconvenient to automate queue configuration updates. > For example, in our cluster setup, we leverage the queue mapping feature from > YARN-2411 to route users to their dedicated organization queues. It could be > extremely cumbersome to keep updating the config file to manage the very > dynamic mapping between users and organizations. > # Even if a user has admin permission on one specific queue, that user is > unable to make queue configuration changes such as resizing the subqueues, > changing queue ACLs, or creating new queues. All these operations need to be > performed in a centralized manner by the cluster administrators. > With these current limitations, we realized the need for a more flexible > configuration mechanism that allows queue configurations to be stored and > managed more dynamically. We developed the feature internally at LinkedIn; it > introduces the concept of MutableConfigurationProvider. What it > essentially does is provide a set of configuration mutation APIs that > allow queue configurations to be updated externally with a set of REST APIs.
> When performing queue configuration changes, the queue ACLs will be > honored, which means only queue administrators can make configuration changes > to a given queue. MutableConfigurationProvider is implemented as a pluggable > interface, and we have one implementation of this interface based on the > embedded Derby database. > This feature has been deployed on LinkedIn's Hadoop cluster for a year now, > and has gone through several iterations of gathering feedback from users > and improving accordingly. With this feature, cluster administrators are able > to automate many of the queue configuration management tasks, such as setting > the queue capacities to adjust cluster resources between queues based on > established resource consumption patterns, or updating the user-to-queue > mappings. We have attached our design documentation with this ticket > and would like to receive feedback from the community regarding how to best > integrate it with the latest version of YARN. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573678#comment-15573678 ] Gour Saha commented on YARN-5735: - [~jianhe] I reviewed the patch and it looks good. Can you rename the patch file with branch name yarn-native-services as the suffix so that Hadoop QA does not try to apply it to trunk? > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5735.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5735: Fix Version/s: yarn-native-services > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-5735.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573656#comment-15573656 ] Sangjin Lee commented on YARN-5699: --- Thanks for the updated patch, [~rohithsharma]! I think it's almost there. A few comments. (AppAttemptMetricsConstants.java) - l.55: the name {{DIAGNOSTICS_INFO_INFO}} doesn't sound quite right; just {{DIAGNOSTICS_INFO}}? (ContainerMetricsConstants.java) - l.64: same as above (TestApplicationHistoryManagerOnTimelineStore.java) - I'm a little confused by this test; it appears that we're preparing these entities as before (i.e. all this info is at the event level). Is that intended? Don't we want to reflect the same changes in this test too? I wonder how this test is passing then (or what it's even testing)? (NMTimelinePublisher.java) - l.207: I suspect it might be the same, but just to be explicit, should we do {{toString()}}? > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, > 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch > > > Currently, all the container information is published in two places: some of > it at the entity info (top-level) and some at the event info. > For containers, some of the event info should be published at the container > info level. For example: container exit status, container state, created time, > finished time. This is general container information required for the > container report, so it is better to publish it in the top-level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
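The distinction at issue here (top-level entity info vs per-event info) can be illustrated with a minimal, self-contained sketch. The classes and key names below are simplified stand-ins, not the actual YARN TimelineEntity/TimelineEvent API or its real constants:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-ins for timeline types, to show where fields belong.
class Event {
    final String id;
    final Map<String, Object> info = new HashMap<>();
    Event(String id) { this.id = id; }
}

class Entity {
    final Map<String, Object> info = new HashMap<>();   // top-level info
    final List<Event> events = new ArrayList<>();
}

class ContainerPublisher {
    // General container fields (exit status, state, created/finished time) go
    // into the top-level entity info, so a container report can be built
    // without scanning individual events; event-specific details stay on the
    // event. All key names here are illustrative.
    static Entity publishFinishedContainer(int exitStatus, String state,
                                           long createdTime, long finishedTime) {
        Entity entity = new Entity();
        entity.info.put("CONTAINER_EXIT_STATUS", exitStatus);
        entity.info.put("CONTAINER_STATE", state);
        entity.info.put("CONTAINER_CREATED_TIME", createdTime);
        entity.info.put("CONTAINER_FINISHED_TIME", finishedTime);
        Event finished = new Event("CONTAINER_FINISHED");
        finished.info.put("DIAGNOSTICS_INFO", "Container exited");
        entity.events.add(finished);
        return entity;
    }
}
```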
[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573628#comment-15573628 ] Hadoop QA commented on YARN-5735: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} | {color:red} YARN-5735 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833247/YARN-5735.1.patch | | JIRA Issue | YARN-5735 | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13387/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-5735.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation
[ https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573617#comment-15573617 ] Hadoop QA commented on YARN-5325: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 1s {color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 9 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 31s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | 
{color:green} 0m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 13s {color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed = 74 total (was 75) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 11s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833241/YARN-5325-YARN-2915.13.patch | | JIRA Issue | YARN-5325 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle shellcheck shelldocs | | uname | Linux ffe2f11c8184 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 0bf6bbb | | Default Java | 1.8.0_101 | | shellcheck | v0.4.4 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13386/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13386/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common | | Console output |
[jira] [Updated] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5735: -- Attachment: YARN-5735.1.patch > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-5735.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5724) [Umbrella] Better Queue Management in YARN
[ https://issues.apache.org/jira/browse/YARN-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573601#comment-15573601 ] Jonathan Hung commented on YARN-5724: - Hi [~xgong], we have a ticket open here: YARN-5734 for similar functionality. Please take a look and let us know what you think! > [Umbrella] Better Queue Management in YARN > -- > > Key: YARN-5724 > URL: https://issues.apache.org/jira/browse/YARN-5724 > Project: Hadoop YARN > Issue Type: Task >Reporter: Xuan Gong >Assignee: Xuan Gong > > This serves as an umbrella ticket for tasks related to better queue > management in YARN. > Today the only way to manage queues is for admins to edit > configuration files and then issue a refresh command. This brings many > inconveniences. For example, users cannot create/delete/modify their > own queues without talking to site-level admins. > Even in today's configuration-based approach, we still have several places > that need improvement: > * It is possible today to add or modify queues without restarting the RM, > via a CS refresh. But to delete a queue, we have to restart the > ResourceManager. > * When a queue is STOPPED, resources allocated to the queue could be handled > better. Currently, they'll only be used if the other queues are set up to go > over their capacity. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management
[ https://issues.apache.org/jira/browse/YARN-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-5734: Attachment: OrgQueue_Design_v0.pdf > OrgQueue for easy CapacityScheduler queue configuration management > -- > > Key: YARN-5734 > URL: https://issues.apache.org/jira/browse/YARN-5734 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Min Shen >Assignee: Min Shen > Attachments: OrgQueue_Design_v0.pdf > > > The current XML-based configuration mechanism in CapacityScheduler makes it > very inconvenient to apply any changes to the queue configurations. We saw two > main drawbacks in the file-based configuration mechanism: > # It makes it very inconvenient to automate queue configuration updates. > For example, in our cluster setup, we leverage the queue mapping feature from > YARN-2411 to route users to their dedicated organization queues. It could be > extremely cumbersome to keep updating the config file to manage the very > dynamic mapping between users and organizations. > # Even if a user has admin permission on one specific queue, that user is > unable to make queue configuration changes such as resizing the subqueues, > changing queue ACLs, or creating new queues. All these operations need to be > performed in a centralized manner by the cluster administrators. > With these current limitations, we realized the need for a more flexible > configuration mechanism that allows queue configurations to be stored and > managed more dynamically. We developed the feature internally at LinkedIn; it > introduces the concept of MutableConfigurationProvider. What it > essentially does is provide a set of configuration mutation APIs that > allow queue configurations to be updated externally with a set of REST APIs. > When performing queue configuration changes, the queue ACLs will be > honored, which means only queue administrators can make configuration changes > to a given queue.
MutableConfigurationProvider is implemented as a pluggable > interface, and we have one implementation of this interface based on the > embedded Derby database. > This feature has been deployed on LinkedIn's Hadoop cluster for a year now, > and has gone through several iterations of gathering feedback from users > and improving accordingly. With this feature, cluster administrators are able > to automate many of the queue configuration management tasks, such as setting > the queue capacities to adjust cluster resources between queues based on > established resource consumption patterns, or updating the user-to-queue > mappings. We have attached our design documentation with this ticket > and would like to receive feedback from the community regarding how to best > integrate it with the latest version of YARN. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205
[ https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-5735: -- Summary: Make the service REST API use the app timeout feature YARN-4205 (was: Make the service REST API use the timeout feature YARN-4205) > Make the service REST API use the app timeout feature YARN-4205 > --- > > Key: YARN-5735 > URL: https://issues.apache.org/jira/browse/YARN-5735 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5735) Make the service REST API use the timeout feature YARN-4205
Jian He created YARN-5735: - Summary: Make the service REST API use the timeout feature YARN-4205 Key: YARN-5735 URL: https://issues.apache.org/jira/browse/YARN-5735 Project: Hadoop YARN Issue Type: Sub-task Reporter: Jian He Assignee: Jian He -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management
Min Shen created YARN-5734: -- Summary: OrgQueue for easy CapacityScheduler queue configuration management Key: YARN-5734 URL: https://issues.apache.org/jira/browse/YARN-5734 Project: Hadoop YARN Issue Type: New Feature Reporter: Min Shen Assignee: Min Shen The current xml based configuration mechanism in CapacityScheduler makes it very inconvenient to apply any changes to the queue configurations. We saw 2 main drawbacks in the file based configuration mechanism: # It is very inconvenient to automate queue configuration updates. For example, in our cluster setup, we leverage the queue mapping feature from YARN-2411 to route users to their dedicated organization queues. It can be extremely cumbersome to keep updating the config file to manage the very dynamic mapping between users and organizations. # Even if a user has admin permission on a specific queue, that user is unable to make any queue configuration changes such as resizing the subqueues, changing queue ACLs, or creating new queues. All these operations need to be performed in a centralized manner by the cluster administrators. Given these limitations, we realized the need for a more flexible configuration mechanism that allows queue configurations to be stored and managed more dynamically. We developed this feature internally at LinkedIn; it introduces the concept of a MutableConfigurationProvider, which essentially provides a set of configuration mutation APIs that allow queue configurations to be updated externally through REST. When queue configuration changes are performed, the queue ACLs are honored, which means only queue administrators can make configuration changes to a given queue. MutableConfigurationProvider is implemented as a pluggable interface, and we have one implementation of this interface based on the embedded Derby database.
This feature has been deployed on LinkedIn's Hadoop cluster for a year now, and has gone through several iterations of gathering feedback from users and improving accordingly. With this feature, cluster administrators are able to automate many of the queue configuration management tasks, such as setting queue capacities to adjust cluster resources between queues based on established resource consumption patterns, or updating the user-to-queue mappings. We have attached our design document to this ticket and would like to receive feedback from the community regarding how to best integrate it with the latest version of YARN.
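The core idea in this proposal — externally driven configuration updates gated by queue ACLs — can be sketched roughly as follows. This is a minimal illustration under assumed names (`QueueConfSketch`, `setCapacity`, the in-memory maps are all hypothetical); it is not the actual LinkedIn implementation or any Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of an ACL-checked, dynamically mutable queue configuration.
// Hypothetical illustration only; not the real MutableConfigurationProvider.
class QueueConfSketch {
    private final Map<String, String> capacities = new HashMap<>();
    private final Map<String, String> admins = new HashMap<>();

    QueueConfSketch() {
        capacities.put("root.engineering", "40");
        admins.put("root.engineering", "alice");
    }

    /** Applies the change only if the user administers the queue. */
    boolean setCapacity(String user, String queue, String capacity) {
        if (!user.equals(admins.get(queue))) {
            return false; // honor queue ACLs: only queue admins may mutate
        }
        capacities.put(queue, capacity);
        return true;
    }

    String getCapacity(String queue) {
        return capacities.get(queue);
    }

    public static void main(String[] args) {
        QueueConfSketch conf = new QueueConfSketch();
        System.out.println(conf.setCapacity("bob", "root.engineering", "60"));   // rejected, not an admin
        System.out.println(conf.setCapacity("alice", "root.engineering", "60")); // applied
        System.out.println(conf.getCapacity("root.engineering"));
    }
}
```

In the proposal this mutation entry point would sit behind a REST endpoint and a pluggable persistence backend (e.g. embedded Derby) rather than an in-memory map.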
[jira] [Updated] (YARN-5325) Stateless ARMRMProxy policies implementation
[ https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-5325: - Attachment: YARN-5325-YARN-2915.13.patch Attaching a patch that fixes checkstyle and formatting for the Yetus check before committing. > Stateless ARMRMProxy policies implementation > > > Key: YARN-5325 > URL: https://issues.apache.org/jira/browse/YARN-5325 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5325-YARN-2915.05.patch, > YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, > YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, > YARN-5325-YARN-2915.10.patch, YARN-5325-YARN-2915.11.patch, > YARN-5325-YARN-2915.12.patch, YARN-5325-YARN-2915.13.patch, > YARN-5325.01.patch, YARN-5325.02.patch, YARN-5325.03.patch, YARN-5325.04.patch > > > This JIRA tracks policies in the AMRMProxy that decide how to forward > ResourceRequests, without maintaining substantial state across decisions > (e.g., broadcast).
[jira] [Resolved] (YARN-5732) Run auxiliary services in system containers
[ https://issues.apache.org/jira/browse/YARN-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen resolved YARN-5732. -- Resolution: Duplicate > Run auxiliary services in system containers > --- > > Key: YARN-5732 > URL: https://issues.apache.org/jira/browse/YARN-5732 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Haibo Chen >Assignee: Haibo Chen > > Auxiliary services today are run within the same node manager process. This > is undesirable because issues within auxiliary services can take down the > whole node manager process. To have better isolation, we can launch auxiliary > services in system containers, which is a concept that we don't have in YARN > today. As a bonus, we can monitor the resource usage of auxiliary > services if they run in separate containers.
[jira] [Commented] (YARN-5732) Run auxiliary services in system containers
[ https://issues.apache.org/jira/browse/YARN-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573536#comment-15573536 ] Haibo Chen commented on YARN-5732: -- Thanks [~jlowe] for pointing it out. I will close this jira as a duplicate and revive YARN-1593. > Run auxiliary services in system containers > --- > > Key: YARN-5732 > URL: https://issues.apache.org/jira/browse/YARN-5732 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Haibo Chen >Assignee: Haibo Chen > > Auxiliary services today are run within the same node manager process. This > is undesirable because issues within auxiliary services can take down the > whole node manager process. To have better isolation, we can launch auxiliary > services in system containers, which is a concept that we don't have in YARN > today. As a bonus, we can monitor the resource usage of auxiliary > services if they run in separate containers.
[jira] [Commented] (YARN-1593) support out-of-proc AuxiliaryServices
[ https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573534#comment-15573534 ] Haibo Chen commented on YARN-1593: -- Hi, [~djp]. Are you actively working on this? As part of the ATS v2 effort, I have been recently looking at this issue. If you have not started working on this, mind if I take it over? > support out-of-proc AuxiliaryServices > - > > Key: YARN-1593 > URL: https://issues.apache.org/jira/browse/YARN-1593 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager, rolling upgrade >Reporter: Ming Ma >Assignee: Junping Du > > AuxiliaryServices such as ShuffleHandler currently run in the same process as > the NM. There are some benefits to hosting them in dedicated processes. > 1. NM rolling restart. If we want to upgrade YARN, an NM restart will force a > ShuffleHandler restart. If ShuffleHandler runs as a separate process, > ShuffleHandler can continue to run during the NM restart. The NM can reconnect to > the running ShuffleHandler after restart. > 2. Resource management. It is possible another type of AuxiliaryService will > be implemented. AuxiliaryServices are considered YARN application specific > and could consume lots of resources. Running AuxiliaryServices in separate > processes allows easier resource management. The NM could potentially stop a > specific AuxiliaryService process from running if it consumes resources far > above its allocation. > Here are some high level ideas: > 1. The NM provides a hosting process for each AuxiliaryService. The existing > AuxiliaryService API doesn't change. > 2. The hosting process provides an RPC server for the AuxiliaryService proxy object > inside the NM to connect to. > 3. When we rolling restart the NM, the existing AuxiliaryService processes will > continue to run. The NM could reconnect to the running AuxiliaryService processes > upon restart. > 4. Policy and resource management of AuxiliaryServices. So far we don't have an > immediate need for this.
An AuxiliaryService could run inside a container, and > its resource utilization could be taken into account by the RM; the RM could then > determine whether a specific type of application overutilizes cluster resources.
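High-level idea 3 above — the NM reconnecting to still-running auxiliary service processes across a restart — can be illustrated with a toy sketch. All names here are hypothetical and a static map stands in for out-of-proc service processes reachable over RPC; this is not YARN code.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of "NM reconnects to a running auxiliary service after restart".
// A static registry stands in for service processes that outlive the NM;
// real out-of-proc services would be separate processes behind an RPC server.
class AuxHostSketch {
    private static final Map<String, Integer> running = new HashMap<>();
    private static int nextPid = 1000;

    /** Reuse an existing service "process" if one is up, else launch one. */
    static int startOrReconnect(String serviceName) {
        return running.computeIfAbsent(serviceName, s -> nextPid++);
    }

    public static void main(String[] args) {
        int pid1 = startOrReconnect("shuffle"); // first NM start: launch the host process
        int pid2 = startOrReconnect("shuffle"); // after NM restart: reconnect, same process
        System.out.println(pid1 == pid2);       // the service survived the NM restart
    }
}
```

The design choice this mimics: because the service's lifetime is decoupled from the NM's, an NM rolling restart leaves shuffle traffic undisturbed.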
[jira] [Commented] (YARN-4779) Fix AM container allocation logic in SLS
[ https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573499#comment-15573499 ] Hadoop QA commented on YARN-4779: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s {color} | {color:red} hadoop-tools/hadoop-sls: The patch generated 13 new + 75 unchanged - 9 fixed = 88 total (was 84) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s {color} | {color:red} hadoop-tools/hadoop-sls generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s {color} | {color:green} hadoop-tools_hadoop-sls generated 0 new + 20 unchanged - 3 fixed = 20 total (was 23) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s {color} | {color:green} hadoop-sls in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 18s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-tools/hadoop-sls | | | Null passed for non-null parameter of AMSimulator.notifyAMContainerLaunched(Container) in org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.notifyAMContainerLaunched(Container) Method invoked at MRAMSimulator.java:of AMSimulator.notifyAMContainerLaunched(Container) in org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.notifyAMContainerLaunched(Container) Method invoked at MRAMSimulator.java:[line 152] | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833225/YARN-4779.4.patch | | JIRA Issue | YARN-4779 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d5cf1b2b9579 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0a85d07 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13384/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt | | findbugs |
[jira] [Updated] (YARN-5690) Integrate native services modules into maven build
[ https://issues.apache.org/jira/browse/YARN-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-5690: - Attachment: YARN-5690-yarn-native-services.002.patch Here's a new patch that wraps the slider.libdir option and removes the two NodeManager-specific classes from the slider AM log4j properties. > Integrate native services modules into maven build > -- > > Key: YARN-5690 > URL: https://issues.apache.org/jira/browse/YARN-5690 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-5690-yarn-native-services.001.patch, > YARN-5690-yarn-native-services.002.patch > > > The yarn dist assembly should include jars for the new modules as well as > their new dependencies. We may want to create new lib directories in the > tarball for the dependencies of the slider-core and services API modules, to > avoid adding these dependencies into the general YARN classpath.
[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery
[ https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573434#comment-15573434 ] Hadoop QA commented on YARN-5638: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 51s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 43s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s {color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 27s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 2 new + 393 unchanged - 10 fixed = 395 total (was 403) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 2s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 29s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 90m 23s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833203/YARN-5638-YARN-5355.v5.patch | | JIRA Issue | YARN-5638 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux fedc768057cb 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-5355 / 5d7ad39 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | |
[jira] [Updated] (YARN-4779) Fix AM container allocation logic in SLS
[ https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-4779: - Attachment: YARN-4779.4.patch Attached ver.4 patch. > Fix AM container allocation logic in SLS > > > Key: YARN-4779 > URL: https://issues.apache.org/jira/browse/YARN-4779 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-4779.1.patch, YARN-4779.2.patch, YARN-4779.3.patch, > YARN-4779.4.patch > > > Currently, SLS uses an unmanaged AM for simulated map-reduce applications, and > the first allocated container for each app is considered to be the master > container. > This could be problematic when preemption happens. CapacityScheduler preempts > AM containers at the lowest priority, but the simulated AM container isn't > recognized by the scheduler -- it is a normal container from the scheduler's > perspective. > This JIRA tries to fix this logic: do a real AM allocation instead of using an > unmanaged AM.
[jira] [Updated] (YARN-5729) Bug fixes identified during testing
[ https://issues.apache.org/jira/browse/YARN-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5729: Attachment: YARN-5729-yarn-native-services.002.patch > Bug fixes identified during testing > --- > > Key: YARN-5729 > URL: https://issues.apache.org/jira/browse/YARN-5729 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: yarn-native-services > > Attachments: YARN-5729-yarn-native-services.001.patch, > YARN-5729-yarn-native-services.002.patch > > > Use this to apply bug fixes identified during testing.
[jira] [Comment Edited] (YARN-5706) Fail to launch SLSRunner due to NPE
[ https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573375#comment-15573375 ] Wangda Tan edited comment on YARN-5706 at 10/13/16 10:16 PM: - +1. Thanks [~lewuathe], will commit tomorrow if there are no objections. was (Author: leftnoteasy): +1 > Fail to launch SLSRunner due to NPE > --- > > Key: YARN-5706 > URL: https://issues.apache.org/jira/browse/YARN-5706 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: YARN-5706.01.patch, YARN-5706.02.patch > > > {code} > java.lang.NullPointerException > at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153) > at > org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76) > {code} > The CLASSPATH for the html resources is not configured properly. {code} > DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH > DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist) > {code} > This issue can be reproduced by following the documentation instructions. > http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html > {code} > $ cd $HADOOP_ROOT/share/hadoop/tools/sls > $ bin/slsrun.sh > --input-rumen |--input-sls=> --output-dir= [--nodes=] > [--track-jobs= ] [--print-simulation] > {code}
[jira] [Commented] (YARN-5732) Run auxiliary services in system containers
[ https://issues.apache.org/jira/browse/YARN-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573374#comment-15573374 ] Jason Lowe commented on YARN-5732: -- Looks like a duplicate of YARN-1593. > Run auxiliary services in system containers > --- > > Key: YARN-5732 > URL: https://issues.apache.org/jira/browse/YARN-5732 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Haibo Chen >Assignee: Haibo Chen > > Auxiliary services today are run within the same node manager process. This > is undesirable because issues within auxiliary services can take down the > whole node manager process. To have better isolation, we can launch auxiliary > services in system containers, which is a concept that we don't have in YARN > today. As a bonus, we can monitor the resource usage of auxiliary > services if they run in separate containers.
[jira] [Commented] (YARN-5706) Fail to launch SLSRunner due to NPE
[ https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573375#comment-15573375 ] Wangda Tan commented on YARN-5706: -- +1 > Fail to launch SLSRunner due to NPE > --- > > Key: YARN-5706 > URL: https://issues.apache.org/jira/browse/YARN-5706 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: YARN-5706.01.patch, YARN-5706.02.patch > > > {code} > java.lang.NullPointerException > at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459) > at > org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153) > at > org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76) > {code} > The CLASSPATH for the html resources is not configured properly. {code} > DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH > DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist) > {code} > This issue can be reproduced by following the documentation instructions. > http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html > {code} > $ cd $HADOOP_ROOT/share/hadoop/tools/sls > $ bin/slsrun.sh > --input-rumen |--input-sls=> --output-dir= [--nodes=] > [--track-jobs= ] [--print-simulation] > {code}
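The NPE above comes from dereferencing a classpath resource that was never injected (the sls html directory was rejected from the CLASSPATH). A defensive pattern for this failure mode — a hedged sketch with hypothetical names, not the actual SLSWebApp fix — is to resolve the resource up front and fail soft:

```java
import java.net.URL;

// Sketch of defensively resolving a classpath resource before use, to avoid
// the kind of NPE seen when share/hadoop/tools/sls/html is missing from the
// CLASSPATH. Hypothetical illustration; not the actual SLSWebApp code.
class ResourceGuardSketch {
    /** Returns the resource URL as a string, or a fallback if not on the classpath. */
    static String resolveOrDefault(String resource, String fallback) {
        URL url = ResourceGuardSketch.class.getClassLoader().getResource(resource);
        // getResource returns null for a missing resource; check it here
        // rather than passing null along and hitting an NPE later.
        return (url == null) ? fallback : url.toString();
    }

    public static void main(String[] args) {
        // With the html directory absent from the CLASSPATH, fall back gracefully.
        System.out.println(resolveOrDefault("html/simulate.html", "<web UI disabled>"));
    }
}
```

Checking for null at the load site (or failing with a clear error message naming the missing directory) would make the misconfiguration described in the issue much easier to diagnose.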
[jira] [Commented] (YARN-5729) Bug fixes identified during testing
[ https://issues.apache.org/jira/browse/YARN-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573365#comment-15573365 ] Gour Saha commented on YARN-5729: - [~jianhe] thank you for reviewing the patch. {quote} why is this needed ? {noformat} this.launchTime = (Date) launchTime.clone(); {noformat} {quote} This is to fix the following findbugs issues reported by Hadoop QA - https://builds.apache.org/job/PreCommit-YARN-Build/13374/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api-warnings.html#Warnings_MALICIOUS_CODE {quote} Does below code need to use the new "getDefaultComponentAsList(Application app)" method as well ? {noformat} if (updateAppData.getNumberOfContainers() != null && updateAppData.getComponents() == null) { updateAppData.setComponents(getDefaultComponentAsList()); } {noformat} {quote} It does not, since for app update (flex in this case) we use only the container count (and don't need the artifact, resource, and launch cmd). If you look at the method _*flexSliderApplication*_, you will see that it checks whether the component-level container count is null, in which case it uses the app-level count. However, I ended up introducing 2 new findbugs errors which I am going to fix and upload in a new 002 patch. > Bug fixes identified during testing > --- > > Key: YARN-5729 > URL: https://issues.apache.org/jira/browse/YARN-5729 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: yarn-native-services > > Attachments: YARN-5729-yarn-native-services.001.patch > > > Use this to apply bug fixes identified during testing.
[jira] [Commented] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.
[ https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573342#comment-15573342 ] Hadoop QA commented on YARN-4849: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 32s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 46s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s {color} | {color:green} YARN-3368 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s {color} | {color:green} hadoop-yarn-ui in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 36s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b17 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833207/YARN-4849-YARN-3368.addendum.4.patch | | JIRA Issue | YARN-4849 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 9acd83ec58e1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3368 / 60c8810 | | Default Java | 1.8.0_101 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13383/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13383/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message 
was automatically generated. > [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix > licenses. > --- > > Key: YARN-4849 > URL: https://issues.apache.org/jira/browse/YARN-4849 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: YARN-3368 > > Attachments: YARN-4849-YARN-3368.1.patch, > YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, > YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, > YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, > YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, > YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, >
[jira] [Updated] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.
[ https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-4849: - Attachment: YARN-4849-YARN-3368.addendum.4.patch Attached addendum.4 patch: removed non-existent files from the rat exclusion list, and removed the editorconfig file. > [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix > licenses. > --- > > Key: YARN-4849 > URL: https://issues.apache.org/jira/browse/YARN-4849 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: YARN-3368 > > Attachments: YARN-4849-YARN-3368.1.patch, > YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, > YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, > YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, > YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, > YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, > YARN-4849-YARN-3368.addendum.4.patch, > YARN-4849-YARN-3368.doc-fix-08172016.1.patch, > YARN-4849-YARN-3368.doc-fix-08232016.1.patch, > YARN-4849-YARN-3368.javadoc-fix-09082016.1.patch, > YARN-4849-YARN-3368.javadoc-fix-09082016.2.patch, > YARN-4849-YARN-3368.javadoc-fix-09082016.3.patch, > YARN-4849-YARN-3368.license-fix-08172016.1.patch, > YARN-4849-YARN-3368.license-fix-08232016.1.patch, > YARN-4849-YARN-3368.rat-fix-08302016.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Reopened] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses.
[ https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan reopened YARN-4849: -- > [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix > licenses. > --- > > Key: YARN-4849 > URL: https://issues.apache.org/jira/browse/YARN-4849 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: YARN-3368 > > Attachments: YARN-4849-YARN-3368.1.patch, > YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, > YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch, > YARN-4849-YARN-3368.6.patch, YARN-4849-YARN-3368.7.patch, > YARN-4849-YARN-3368.8.patch, YARN-4849-YARN-3368.addendum.1.patch, > YARN-4849-YARN-3368.addendum.2.patch, YARN-4849-YARN-3368.addendum.3.patch, > YARN-4849-YARN-3368.doc-fix-08172016.1.patch, > YARN-4849-YARN-3368.doc-fix-08232016.1.patch, > YARN-4849-YARN-3368.javadoc-fix-09082016.1.patch, > YARN-4849-YARN-3368.javadoc-fix-09082016.2.patch, > YARN-4849-YARN-3368.javadoc-fix-09082016.3.patch, > YARN-4849-YARN-3368.license-fix-08172016.1.patch, > YARN-4849-YARN-3368.license-fix-08232016.1.patch, > YARN-4849-YARN-3368.rat-fix-08302016.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5733) Run PerNodeTimelineCollectorsAuxService in a system container
Haibo Chen created YARN-5733: Summary: Run PerNodeTimelineCollectorsAuxService in a system container Key: YARN-5733 URL: https://issues.apache.org/jira/browse/YARN-5733 Project: Hadoop YARN Issue Type: Sub-task Reporter: Haibo Chen Assignee: Haibo Chen This mostly tracks YARN-5732 (Run auxiliary services in system containers), because under the current implementation TimelineCollectorManager is implemented as an auxiliary service. We expect YARN-5732 to be transparent to all auxiliary services, so there should be minimal work here beyond verification. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573181#comment-15573181 ] Wangda Tan commented on YARN-5145: -- Discussed offline with [~sunilg]. CORS is supported by timeline server v1 only; since the ATS v2 integration with the web UI is not yet merged into this branch, I think we don't need to hardcode proxy settings in the code. Instead, we should add CORS support to ATS v2, so no hardcoding in the UI code is needed once the ATS v2 code is merged. Also, the file's formatting seems incorrect: it mixes tabs and spaces. > [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR > - > > Key: YARN-5145 > URL: https://issues.apache.org/jira/browse/YARN-5145 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Sunil G > Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, > YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, > YARN-5145-YARN-3368.03.patch, newUIInOldRMWebServer.png > > > Existing YARN UI configuration is under Hadoop package's directory: > $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to > $HADOOP_CONF_DIR like other configurations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5732) Run auxiliary services in system containers
[ https://issues.apache.org/jira/browse/YARN-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-5732: --- Issue Type: New Feature (was: Task) > Run auxiliary services in system containers > --- > > Key: YARN-5732 > URL: https://issues.apache.org/jira/browse/YARN-5732 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Haibo Chen >Assignee: Haibo Chen > > Auxiliary services today are run within the same node manager process. This > is undesirable because issues within auxiliary services can take down the > whole node manager process. To have better isolation, we can launch auxiliary > services in system containers, which is a concept that we don't have in YARN > today. As a bonus point, we can monitor the resource usage of auxiliary > services if they run in separate containers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery
[ https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Lu updated YARN-5638: Attachment: YARN-5638-YARN-5355.v5.patch V5 patch to address two issues discovered during real-cluster testing: 1. make sure all registered collectors are removed when an app finishes; 2. fix an occasional client NPE when the RM transitions from standby to active. > Introduce a collector timestamp to uniquely identify collectors creation > order in collector discovery > - > > Key: YARN-5638 > URL: https://issues.apache.org/jira/browse/YARN-5638 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Li Lu > Attachments: YARN-5638-YARN-5355.v4.patch, > YARN-5638-YARN-5355.v5.patch, YARN-5638-trunk.v1.patch, > YARN-5638-trunk.v2.patch, YARN-5638-trunk.v3.patch > > > As discussed in YARN-3359, we need to further identify timeline collectors' > creation order to rebuild collector discovery data in the RM. This JIRA > proposes to use a timestamp to order collectors > for each application in the RM. This timestamp can then be used, when a > standby RM becomes active, to rebuild collector discovery data. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
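The timestamp-ordering idea above can be sketched as follows. This is a hypothetical, minimal registry (class and method names are illustrative, not YARN's actual API): the RM keeps, per application, only the collector registration carrying the highest timestamp, so discovery data can be rebuilt correctly even when registrations are replayed out of order after a failover.

```java
import java.util.HashMap;
import java.util.Map;

public class CollectorRegistry {
    static class Entry {
        final String address;
        final long timestamp;
        Entry(String address, long timestamp) {
            this.address = address;
            this.timestamp = timestamp;
        }
    }

    private final Map<String, Entry> collectors = new HashMap<>();

    // Accept a registration only if it is newer than the one we already hold;
    // stale (lower-timestamp) registrations replayed after a failover are ignored.
    public boolean register(String appId, String address, long timestamp) {
        Entry current = collectors.get(appId);
        if (current == null || timestamp > current.timestamp) {
            collectors.put(appId, new Entry(address, timestamp));
            return true;
        }
        return false;
    }

    public String getAddress(String appId) {
        Entry e = collectors.get(appId);
        return e == null ? null : e.address;
    }
}
```

With this rule, the order in which registration messages arrive at the newly active RM no longer matters; only the timestamps decide which collector wins.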
[jira] [Commented] (YARN-5719) Enforce a C standard for native container-executor
[ https://issues.apache.org/jira/browse/YARN-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573147#comment-15573147 ] Chris Douglas commented on YARN-5719: - [~aw] would you mind taking a look? > Enforce a C standard for native container-executor > -- > > Key: YARN-5719 > URL: https://issues.apache.org/jira/browse/YARN-5719 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager >Reporter: Chris Douglas > Attachments: YARN-5719.000.patch > > > The {{container-executor}} build should declare the C standard it uses. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (YARN-5719) Enforce a C standard for native container-executor
[ https://issues.apache.org/jira/browse/YARN-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated YARN-5719: Comment: was deleted (was: [~aw] would you mind taking a look?) > Enforce a C standard for native container-executor > -- > > Key: YARN-5719 > URL: https://issues.apache.org/jira/browse/YARN-5719 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager >Reporter: Chris Douglas > Attachments: YARN-5719.000.patch > > > The {{container-executor}} build should declare the C standard it uses. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5719) Enforce a C standard for native container-executor
[ https://issues.apache.org/jira/browse/YARN-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573148#comment-15573148 ] Chris Douglas commented on YARN-5719: - [~aw] would you mind taking a look? > Enforce a C standard for native container-executor > -- > > Key: YARN-5719 > URL: https://issues.apache.org/jira/browse/YARN-5719 > Project: Hadoop YARN > Issue Type: Task > Components: nodemanager >Reporter: Chris Douglas > Attachments: YARN-5719.000.patch > > > The {{container-executor}} build should declare the C standard it uses. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5732) Run auxiliary services in system containers
[ https://issues.apache.org/jira/browse/YARN-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5732: - Description: Auxiliary services today are run within the same node manager process. This is undesirable because issues within auxiliary services can take down the whole node manager process. To have better isolation, we can launch auxiliary services in system containers, which is a concept that we don't have in YARN today. As a bonus point, we have monitor the resource usage of auxiliary services if they run in separate containers. (was: Auxiliary services today are run within the same node manager process. This is desirable because issues within auxiliary services can take down the whole node manager process. To have better isolation, we can launch auxiliary services in system containers, which is a concept that we don't have in YARN today. As a bonus point, we have monitor the resource usage of auxiliary services if they run in separate containers.) > Run auxiliary services in system containers > --- > > Key: YARN-5732 > URL: https://issues.apache.org/jira/browse/YARN-5732 > Project: Hadoop YARN > Issue Type: Task >Reporter: Haibo Chen >Assignee: Haibo Chen > > Auxiliary services today are run within the same node manager process. This > is undesirable because issues within auxiliary services can take down the > whole node manager process. To have better isolation, we can launch auxiliary > services in system containers, which is a concept that we don't have in YARN > today. As a bonus point, we have monitor the resource usage of auxiliary > services if they run in separate containers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5732) Run auxiliary services in system containers
Haibo Chen created YARN-5732: Summary: Run auxiliary services in system containers Key: YARN-5732 URL: https://issues.apache.org/jira/browse/YARN-5732 Project: Hadoop YARN Issue Type: Task Reporter: Haibo Chen Assignee: Haibo Chen Auxiliary services today are run within the same node manager process. This is desirable because issues within auxiliary services can take down the whole node manager process. To have better isolation, we can launch auxiliary services in system containers, which is a concept that we don't have in YARN today. As a bonus point, we have monitor the resource usage of auxiliary services if they run in separate containers. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573093#comment-15573093 ] Hadoop QA commented on YARN-2009: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 69 new + 178 unchanged - 30 fixed = 247 total (was 208) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 13s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 5s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 53m 53s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector$TAPriorityComparator implements Comparator but not Serializable At IntraQueueCandidatesSelector.java:Serializable At IntraQueueCandidatesSelector.java:[lines 43-53] | | | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.IntraQueueCandidatesSelector$TAReverseComparator implements Comparator but not Serializable At IntraQueueCandidatesSelector.java:Serializable At IntraQueueCandidatesSelector.java:[lines 57-67] | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833185/YARN-2009.0006.patch | | JIRA Issue | YARN-2009 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ea0388f75bb8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 332a61f | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | |
[jira] [Commented] (YARN-5729) Bug fixes identified during testing
[ https://issues.apache.org/jira/browse/YARN-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573034#comment-15573034 ] Hadoop QA commented on YARN-5729: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 8s {color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s {color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api in yarn-native-services has 14 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api: The patch generated 1 new + 166 unchanged - 2 fixed = 167 total (was 168) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api generated 2 new + 5 unchanged - 9 fixed = 7 total (was 14) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s {color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s {color} | {color:red} The patch generated 10 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 42s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api | | | Null passed for non-null parameter of org.apache.hadoop.yarn.services.resource.Application.setLaunchTime(Date) in org.apache.hadoop.yarn.services.api.impl.ApplicationApiService.populateAppData(Application, JsonObject, JsonObject, JsonObject) Method invoked at ApplicationApiService.java:of org.apache.hadoop.yarn.services.resource.Application.setLaunchTime(Date) in org.apache.hadoop.yarn.services.api.impl.ApplicationApiService.populateAppData(Application, JsonObject, JsonObject, JsonObject) Method invoked at ApplicationApiService.java:[line 919] | | | Null passed for non-null parameter of org.apache.hadoop.yarn.services.resource.Container.setLaunchTime(Date) in org.apache.hadoop.yarn.services.api.impl.ApplicationApiService.populateAppData(Application, JsonObject, JsonObject, JsonObject) Method invoked at ApplicationApiService.java:of
[jira] [Commented] (YARN-5729) Bug fixes identified during testing
[ https://issues.apache.org/jira/browse/YARN-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573032#comment-15573032 ] Jian He commented on YARN-5729: --- - why is this needed ? {code} this.launchTime = (Date) launchTime.clone(); {code} - Does below code need to use the new "getDefaultComponentAsList(Application app)" method as well ? {code} if (updateAppData.getNumberOfContainers() != null && updateAppData.getComponents() == null) { updateAppData.setComponents(getDefaultComponentAsList()); } {code} > Bug fixes identified during testing > --- > > Key: YARN-5729 > URL: https://issues.apache.org/jira/browse/YARN-5729 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: yarn-native-services > > Attachments: YARN-5729-yarn-native-services.001.patch > > > Use this to apply bug fixes identified during testing. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
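The `clone()` in the snippet Jian He quotes is a defensive copy: `java.util.Date` is mutable, so storing the caller's reference would let later mutations of that Date leak into the stored state. A minimal illustration (the `LaunchInfo` class is hypothetical, standing in for the resource classes touched by the patch):

```java
import java.util.Date;

// Hypothetical holder class showing why a mutable Date field
// is stored and returned as a copy rather than by reference.
class LaunchInfo {
    private Date launchTime;

    public void setLaunchTime(Date launchTime) {
        // Clone so later mutation of the caller's Date can't change our state.
        this.launchTime = (launchTime == null) ? null : (Date) launchTime.clone();
    }

    public Date getLaunchTime() {
        // Return a copy too, so callers can't mutate internal state.
        return (launchTime == null) ? null : (Date) launchTime.clone();
    }
}

public class DefensiveCopyDemo {
    public static void main(String[] args) {
        Date d = new Date(1000L);
        LaunchInfo info = new LaunchInfo();
        info.setLaunchTime(d);
        d.setTime(9999L); // mutate the caller's Date after handing it over
        // The stored value is unaffected by the caller's mutation.
        System.out.println(info.getLaunchTime().getTime()); // prints 1000
    }
}
```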
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572984#comment-15572984 ] Eric Payne commented on YARN-2009: -- [~sunilg], I think you may have missed my comment from [above|https://issues.apache.org/jira/browse/YARN-2009?focusedCommentId=15553303=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15553303]: - {{FifoIntraQueuePreemptionPlugin#calculateIdealAssignedResourcePerApp}} -- The assignment to {{tmpApp.idealAssigned}} should be cloned: {code} tmpApp.idealAssigned = Resources.min(rc, clusterResource, queueTotalUnassigned, appIdealAssigned); ... Resources.subtractFrom(queueTotalUnassigned, tmpApp.idealAssigned); {code} -- In the above code, if {{queueTotalUnassigned}} is less than {{appIdealAssigned}}, then {{tmpApp.idealAssigned}} is assigned a reference to {{queueTotalUnassigned}}. Then, later, {{tmpApp.idealAssigned}} is actually subtracted from itself. So, I think the above code should be: {code} tmpApp.idealAssigned = Resources.clone(Resources.min(...)); {code} > Priority support for preemption in ProportionalCapacityPreemptionPolicy > --- > > Key: YARN-2009 > URL: https://issues.apache.org/jira/browse/YARN-2009 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Devaraj K >Assignee: Sunil G > Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, > YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, > YARN-2009.0006.patch > > > While preempting containers based on the queue ideal assignment, we may need > to consider preempting the low priority application containers first. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
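Eric's point is about reference aliasing: if `Resources.min` returns one of its arguments rather than a copy, then whenever `queueTotalUnassigned` is the minimum, `tmpApp.idealAssigned` and `queueTotalUnassigned` are the same object, and the subsequent `subtractFrom` zeroes the app's ideal assignment as a side effect. A minimal sketch with a stand-in `Res` class (hypothetical; YARN's real `Resource`/`Resources` API differs):

```java
// Minimal stand-in for YARN's resource arithmetic; names are illustrative.
class Res {
    long memory;
    Res(long memory) { this.memory = memory; }
    // Like Resources.min: returns a reference to an argument, not a copy.
    static Res min(Res a, Res b) { return a.memory <= b.memory ? a : b; }
    static Res cloneOf(Res r) { return new Res(r.memory); }
    // Like Resources.subtractFrom: mutates lhs in place.
    static void subtractFrom(Res lhs, Res rhs) { lhs.memory -= rhs.memory; }
}

public class AliasingDemo {
    // Returns the app's idealAssigned value after the queue bookkeeping step.
    static long idealAfterUpdate(boolean cloneResult) {
        Res queueTotalUnassigned = new Res(4);  // queue has less than the app wants
        Res appIdealAssigned = new Res(10);
        Res idealAssigned = Res.min(queueTotalUnassigned, appIdealAssigned);
        if (cloneResult) {
            idealAssigned = Res.cloneOf(idealAssigned); // break the alias
        }
        // Without the clone, idealAssigned IS queueTotalUnassigned here,
        // so this subtracts the value from itself and corrupts it to 0.
        Res.subtractFrom(queueTotalUnassigned, idealAssigned);
        return idealAssigned.memory;
    }

    public static void main(String[] args) {
        System.out.println(idealAfterUpdate(false)); // prints 0 (buggy: alias corrupted)
        System.out.println(idealAfterUpdate(true));  // prints 4 (fixed by cloning)
    }
}
```

This is why cloning the result of `min` before the subtraction, as suggested in the comment above, preserves the app's ideal assignment.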
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572955#comment-15572955 ] Eric Payne commented on YARN-2009: -- Thanks [~sunilg]. In order to backport YARN-2009 to 2.8, I think that at least YARN-4108 and YARN-4822 would need to also be backported to 2.8. > Priority support for preemption in ProportionalCapacityPreemptionPolicy > --- > > Key: YARN-2009 > URL: https://issues.apache.org/jira/browse/YARN-2009 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Devaraj K >Assignee: Sunil G > Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, > YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, > YARN-2009.0006.patch > > > While preempting containers based on the queue ideal assignment, we may need > to consider preempting the low priority application containers first. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-2009: -- Attachment: YARN-2009.0006.patch Uploading a new patch after fixing a few Jenkins issues. Also added node-label tests. [~leftnoteasy] and [~eepayne], please help review. [~eepayne], I think we can bring this to branch-2. I will check whether the other surgical preemption patches are in 2.8. > Priority support for preemption in ProportionalCapacityPreemptionPolicy > --- > > Key: YARN-2009 > URL: https://issues.apache.org/jira/browse/YARN-2009 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Devaraj K >Assignee: Sunil G > Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, > YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, > YARN-2009.0006.patch > > > While preempting containers based on the queue ideal assignment, we may need > to consider preempting the low priority application containers first. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation
[ https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572865#comment-15572865 ] Hadoop QA commented on YARN-5325: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s {color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | 
{color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 12s {color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed = 74 total (was 75) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 14s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833176/YARN-5325-YARN-2915.12.patch | | JIRA Issue | YARN-5325 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle shellcheck shelldocs | | uname | Linux bdfecc5d9ae2 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 0bf6bbb | | Default Java | 1.8.0_101 | | shellcheck | v0.4.4 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13379/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13379/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common | | Console output |
[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation
[ https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572826#comment-15572826 ] Carlo Curino commented on YARN-5325: [~subru] I believe I addressed all your asks; let's wait for the usual checkstyle run (as I changed lots of stuff and IntelliJ doesn't enforce exactly what checkstyle wants), and then we are good to commit (if green). > Stateless ARMRMProxy policies implementation > > > Key: YARN-5325 > URL: https://issues.apache.org/jira/browse/YARN-5325 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5325-YARN-2915.05.patch, > YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, > YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, > YARN-5325-YARN-2915.10.patch, YARN-5325-YARN-2915.11.patch, > YARN-5325-YARN-2915.12.patch, YARN-5325.01.patch, YARN-5325.02.patch, > YARN-5325.03.patch, YARN-5325.04.patch > > > This JIRA tracks policies in the AMRMProxy that decide how to forward > ResourceRequests, without maintaining substantial state across decisions > (e.g., broadcast). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5325) Stateless ARMRMProxy policies implementation
[ https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-5325: --- Attachment: YARN-5325-YARN-2915.12.patch > Stateless ARMRMProxy policies implementation > > > Key: YARN-5325 > URL: https://issues.apache.org/jira/browse/YARN-5325 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-5325-YARN-2915.05.patch, > YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, > YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, > YARN-5325-YARN-2915.10.patch, YARN-5325-YARN-2915.11.patch, > YARN-5325-YARN-2915.12.patch, YARN-5325.01.patch, YARN-5325.02.patch, > YARN-5325.03.patch, YARN-5325.04.patch > > > This JIRA tracks policies in the AMRMProxy that decide how to forward > ResourceRequests, without maintaining substantial state across decisions > (e.g., broadcast). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572814#comment-15572814 ] Rohith Sharma K S commented on YARN-5699: - The test failure is not related to the patch; YARN-5679 handles it. The checkstyle issue is related to the patch, and I will fix it and upload a new patch. Before that, I would like to hear any review comments so that I can upload a single patch. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, > 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch > > > Currently, all the container information is published in 2 places. Some of > it is in the entity info (top-level) and some in the event info. > For containers, some of the event info should be published at the container info > level. For example: container exit status, container state, createdTime, > finish time. This is general container information required for the > container report, so it is better to publish it in the top-level info field. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
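The field promotion discussed in YARN-5699 can be sketched as follows. This is an illustrative Python model, not the actual Java TimelineEntity API; the field names and dict layout are assumptions for the sake of the example.

```python
# Illustrative sketch (Python, not Hadoop's Java TimelineEntity API) of the
# change discussed above: promote general container fields from per-event info
# maps to the entity's top-level info map, where container-report queries can
# read them without scanning events. Field names here are hypothetical.

def promote_event_info(entity, keys=("exitStatus", "state", "createdTime", "finishedTime")):
    """Copy selected fields from event-level info into top-level entity info."""
    for event in entity["events"]:
        for key in keys:
            if key in event["info"]:
                entity["info"][key] = event["info"][key]
    return entity

container_entity = {
    "id": "container_123456_0001_01_000001",
    "type": "YARN_CONTAINER",
    "info": {},  # top-level info, initially empty
    "events": [
        {"id": "CONTAINER_FINISHED",
         "info": {"exitStatus": 0, "state": "COMPLETE", "finishedTime": 1476366000000}},
        {"id": "CONTAINER_CREATED",
         "info": {"createdTime": 1476365000000}},
    ],
}

promote_event_info(container_entity)
print(container_entity["info"]["exitStatus"])  # → 0
```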
[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy
[ https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572765#comment-15572765 ] Eric Payne commented on YARN-2009: -- [~sunilg], do you plan on backporting this to branch-2/branch-2.8? > Priority support for preemption in ProportionalCapacityPreemptionPolicy > --- > > Key: YARN-2009 > URL: https://issues.apache.org/jira/browse/YARN-2009 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler >Reporter: Devaraj K >Assignee: Sunil G > Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, > YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch > > > While preempting containers based on the queue ideal assignment, we may need > to consider preempting the low priority application containers first. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5690) Integrate native services modules into maven build
[ https://issues.apache.org/jira/browse/YARN-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572727#comment-15572727 ] Billie Rinaldi commented on YARN-5690: -- I am working on a fix for the long line. bq. We can probably remove the un relevant entries in the slideram-log4j.properties? A bunch of them seems for yarn classes. I think this is so we can have logging from yarn classes in the Slider AM log. > Integrate native services modules into maven build > -- > > Key: YARN-5690 > URL: https://issues.apache.org/jira/browse/YARN-5690 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-5690-yarn-native-services.001.patch > > > The yarn dist assembly should include jars for the new modules as well as > their new dependencies. We may want to create new lib directories in the > tarball for the dependencies of the slider-core and services API modules, to > avoid adding these dependencies into the general YARN classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host
[ https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572688#comment-15572688 ] Miklos Szegedi commented on YARN-5725: -- The unit test failure is YARN-5377 (13/Jul/16), so it is unlikely that it is a regression. > Test uncaught exception in > TestContainersMonitorResourceChange.testContainersResourceChange when setting > IP and host > > > Key: YARN-5725 > URL: https://issues.apache.org/jira/browse/YARN-5725 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-5725.000.patch, YARN-5725.001.patch > > Original Estimate: 2h > Remaining Estimate: 2h > > The issue is a warning, but it prevents the container monitor from continuing > 2016-10-12 14:38:23,280 WARN [Container Monitor] > monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - > Uncaught exception in ContainersMonitorImpl while monitoring resource of > container_123456_0001_01_01 > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455) > 2016-10-12 14:38:23,281 WARN [Container Monitor] > monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl > is interrupted. Exiting. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
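The defensive pattern this kind of fix implies can be sketched as follows. This is a hedged Python illustration, not the actual ContainersMonitorImpl code; the function and container names are invented for the example.

```python
# Minimal sketch of the defensive pattern: catch and log per-container
# exceptions inside the monitoring loop so one container whose host/IP is not
# yet set cannot kill the whole monitor thread (mirroring the NPE above).
# Names (monitor_all, check, containers) are illustrative, not Hadoop's.

import logging

def monitor_all(containers, monitor_container):
    failures = []
    for cid, container in containers.items():
        try:
            monitor_container(container)
        except Exception:
            # Log and continue with the remaining containers.
            logging.exception("Uncaught exception while monitoring %s", cid)
            failures.append(cid)
    return failures

def check(container):
    # Raises AttributeError when the container's host is not yet set,
    # mirroring the NullPointerException in the report above.
    return container["host"].lower()

containers = {"c1": {"host": "node-1"}, "c2": {"host": None}, "c3": {"host": "node-3"}}
print(monitor_all(containers, check))  # → ['c2']
```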
[jira] [Commented] (YARN-5377) TestQueuingContainerManager.testKillMultipleOpportunisticContainers fails in trunk
[ https://issues.apache.org/jira/browse/YARN-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572681#comment-15572681 ] Miklos Szegedi commented on YARN-5377: -- This happened again in YARN-5725 at one of the patches. I was not able to reproduce it locally. {code} Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 134.743 sec <<< FAILURE! - in org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager testKillMultipleOpportunisticContainers(org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager) Time elapsed: 32.169 sec <<< FAILURE! java.lang.AssertionError: ContainerState is not correct (timedout) expected: but was: at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.apache.hadoop.yarn.server.nodemanager.containermanager.BaseContainerManagerTest.waitForNMContainerState(BaseContainerManagerTest.java:368) at org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager.testKillMultipleOpportunisticContainers(TestQueuingContainerManager.java:470) {code} I am wondering about the root cause since the timeout is already 40 seconds. {code} BaseContainerManagerTest.waitForNMContainerState(containerManager, createContainerId(0), ContainerState.DONE, 40); {code} > TestQueuingContainerManager.testKillMultipleOpportunisticContainers fails in > trunk > -- > > Key: YARN-5377 > URL: https://issues.apache.org/jira/browse/YARN-5377 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S > > Test case fails jenkin build > [link|https://builds.apache.org/job/PreCommit-YARN-Build/12228/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt] > {noformat} > Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 134.586 sec > <<< FAILURE! 
- in > org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager > testKillMultipleOpportunisticContainers(org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager) > Time elapsed: 32.134 sec <<< FAILURE! > java.lang.AssertionError: ContainerState is not correct (timedout) > expected: but was: > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.BaseContainerManagerTest.waitForNMContainerState(BaseContainerManagerTest.java:363) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager.testKillMultipleOpportunisticContainers(TestQueuingContainerManager.java:470) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
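The `waitForNMContainerState` helper the stack trace points at is a poll-until-timeout loop. A hedged Python sketch of that shape (the real helper is Java in BaseContainerManagerTest; this is only an illustration of the pattern and its failure message):

```python
# Sketch of a waitForNMContainerState-style helper: poll the container state
# until it matches or the timeout elapses, then fail with the last observed
# state. Parameter names and the polling interval are assumptions.

import time

def wait_for_state(get_state, expected, timeout_s=40, poll_s=0.01):
    deadline = time.monotonic() + timeout_s
    state = get_state()
    while state != expected:
        if time.monotonic() >= deadline:
            raise AssertionError(
                "ContainerState is not correct (timedout) "
                "expected:<%s> but was:<%s>" % (expected, state))
        time.sleep(poll_s)
        state = get_state()
    return state

# Simulate a container that reaches DONE on the third poll.
states = iter(["RUNNING", "KILLING", "DONE", "DONE"])
print(wait_for_state(lambda: next(states), "DONE"))  # → DONE
```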
[jira] [Commented] (YARN-5673) [Umbrella] Re-write container-executor to improve security, extensibility, and portability
[ https://issues.apache.org/jira/browse/YARN-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572625#comment-15572625 ] Sidharta Seethana commented on YARN-5673: - [~templedf], I think this is restricted to the container-executor binary, specifically, not ContainerExecutor. You are right, though - about additional functionality being pushed into lower layers and the use of ENV variables. The former was to ensure backward compatibility at the ContainerExecutor interface layer. The latter was to ensure a) no changes to protocols and b) that existing apps can use the new functionality without changes (e.g. MR, Spark). We should discuss these in a new/different JIRA, though. > [Umbrella] Re-write container-executor to improve security, extensibility, > and portability > -- > > Key: YARN-5673 > URL: https://issues.apache.org/jira/browse/YARN-5673 > Project: Hadoop YARN > Issue Type: New Feature > Components: nodemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: container-executor Re-write Design Document.pdf > > > As YARN adds support for new features that require administrator > privileges(such as support for network throttling and docker), we’ve had to > add new capabilities to the container-executor. This has led to a recognition > that the current container-executor security features as well as the code > could be improved. The current code is fragile and it’s hard to add new > features without causing regressions. Some of the improvements that need to > be made are - > *Security* > Currently the container-executor has limited security features. It relies > primarily on the permissions set on the binary but does little additional > security beyond that. 
There are a few outstanding issues today - > - No audit log > - No way to disable features - network throttling and docker support are > built in and there’s no way to turn them off at a container-executor level > - Code can be improved - a lot of the code switches users back and forth in > an arbitrary manner > - No input validation - the paths, and files provided at invocation are not > validated or required to be in some specific location > - No signing functionality - there is no way to enforce that the binary was > invoked by the NM and not by any other process > *Code Issues* > The code layout and implementation themselves can be improved. Some issues > there are - > - No support for log levels - everything is logged and this can’t be turned > on or off > - Extremely long set of invocation parameters (specifically during container > launch) which makes turning features on or off complicated > - Poor test coverage - it’s easy to introduce regressions today due to the > lack of a proper test setup > - Duplicate functionality - there is some amount of code duplication > - Hard to make improvements or add new features due to the issues raised above > *Portability* > - The container-executor mixes platform dependent APIs with platform > independent APIs making it hard to run it on multiple platforms. Allowing it > to run on multiple platforms also improves the overall code structure. > One option is to improve the existing container-executor; however, it might be > easier to start from scratch. That allows existing functionality to be > supported until we are ready to switch to the new code. > This umbrella JIRA is to capture all the work required for the new code. I'm > going to work on a design doc for the changes - any suggestions or > improvements are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order
[ https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572451#comment-15572451 ] Varun Saxena commented on YARN-5715: bq. Here, we can not use bytes as directory. I think need to ignore entityPrefix for file system storage and carry on with default sorting order. Should be fine for this patch. We do not really maintain the FS implementation, as it was test-only. There was some discussion regarding an implementation for just trying out ATSv2 when we merged our branch to trunk. If we enhance the FS implementation as part of that, we can relook at this then. We may decide to ignore it as well, or probably pad it with zeroes (as a long can't be greater than 9223372036854775807). I will take a closer look at the patch by tomorrow so as to move this JIRA forward. > introduce entity prefix for return and sort order > - > > Key: YARN-5715 > URL: https://issues.apache.org/jira/browse/YARN-5715 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Sangjin Lee >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: YARN-5715-YARN-5355.01.patch, > YARN-5715-YARN-5355.02.patch, YARN-5715-YARN-5355.03.patch > > > While looking into YARN-5585, we have come across the need to provide a sort > order different than the current entity id order. The current entity id order > returns entities strictly in the lexicographical order, and as such it > returns the earliest entities first. This may not be the most natural return > order. A more natural return/sort order would be from the most recent > entities. > To solve this, we would like to add what we call the "entity prefix" in the > row key for the entity table. It is a number (long) that can be easily > provided by the client on write. In the row key, it would be added before the > entity id itself. > The entity prefix would be considered mandatory. 
On all writes (including > updates) the correct entity prefix should be set by the client so that the > correct row key is used. The entity prefix needs to be unique only within the > scope of the application and the entity type. > For queries that return a list of entities, the prefix values will be > returned along with the entity id's. Queries that specify the prefix and the > id should be returned quickly using the row key. If the query omits the > prefix but specifies the id (query by id), the query may be less efficient. > This JIRA should add the entity prefix to the entity API and add its handling > to the schema and the write path. The read path will be addressed in > YARN-5585. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
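The row-key idea described above can be sketched in a few lines. This Python sketch assumes (it is not confirmed in this thread) that "most recent first" is achieved by storing the inverted prefix, so that larger (newer) prefixes sort earlier in lexicographic byte order; the zero-padding variant is the alternative mentioned for the file system storage.

```python
# Sketch of the entity-prefix row key: Long.MAX_VALUE - prefix, packed
# big-endian, sorts newer (larger) prefixes first. The inversion is an
# assumption for illustration, not the committed schema.

import struct

LONG_MAX = 9223372036854775807  # the bound mentioned in the comment above

def entity_row_key(entity_prefix, entity_id):
    inverted = LONG_MAX - entity_prefix
    return struct.pack(">q", inverted) + entity_id.encode("utf-8")

# For a filesystem store that needs printable directory names, zero-padding
# the inverted decimal form preserves the same ordering (a long has at most
# 19 decimal digits, so 19 characters always suffice):
def fs_prefix(entity_prefix):
    return str(LONG_MAX - entity_prefix).zfill(19)

# The entity written with the largest prefix (most recent) sorts first.
keys = sorted(entity_row_key(p, "entity_1") for p in (100, 5, 42))
```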
[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST
[ https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572419#comment-15572419 ] Varun Saxena commented on YARN-5561: I was also thinking that we can have an endpoint like /ws/v2/applicationhistory. This can be used to serve not only app-attempt/container reports but also use cases like serving aggregated logs of historical apps. We would need to serve aggregated logs from somewhere too. This will be useful for the UI as well. Regarding metrics, well, we can always extend ContainerInfo to carry metrics as well. Right? > [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and > entities via REST > --- > > Key: YARN-5561 > URL: https://issues.apache.org/jira/browse/YARN-5561 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, > YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch > > > The ATSv2 model lacks retrieval of {{list-of-all-apps}}, > {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via > REST APIs. It is also required to know about all the entities in an > application. These URLs are very much required for the Web UI. > The new REST URLs would be > # GET {{/ws/v2/timeline/apps}} > # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}. > # GET > {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}} > # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of > entities that can be queried. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5689) Update native services REST API to use agentless docker provider
[ https://issues.apache.org/jira/browse/YARN-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572390#comment-15572390 ] Gour Saha commented on YARN-5689: - Quick note, this patch was created by [~billie.rinaldi] corresponding to her patch in YARN-5505. > Update native services REST API to use agentless docker provider > > > Key: YARN-5689 > URL: https://issues.apache.org/jira/browse/YARN-5689 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Gour Saha > Fix For: yarn-native-services > > Attachments: YARN-5689-yarn-native-services.001.patch, > YARN-5689-yarn-native-services.002.patch > > > The initial version of the native services REST API uses the agent provider. > It should be converted to use the new docker provider instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module
[ https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572341#comment-15572341 ] Sangjin Lee commented on YARN-5667: --- {{mvn dependency:analyze}} should be reliable once you have built it ({{mvn install}} or {{mvn package}}). There are two parts to this: used but undeclared dependencies and unused declared dependencies. I normally find used but undeclared dependencies to be accurate with few exceptions. I believe it is based on bytecode analysis, and it should be highly accurate. Sometimes the referred classes can be in multiple artifacts, and that might confuse the dependency plugin, but that's about it. On the other hand, unused declared dependencies are tricky. In most cases, these are runtime dependencies that are truly needed but escape bytecode analysis. Unfortunately one would just have to take a look at them one by one. > Move HBase backend code in ATS v2 into its separate module > --- > > Key: YARN-5667 > URL: https://issues.apache.org/jira/browse/YARN-5667 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: New module structure.png, part1.yarn5667.prelim.patch, > part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, > part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, > pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, > pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, > pt9.yarn5667.001.patch, yarn5667-001.tar.gz > > > The HBase backend code currently lives along with the core ATS v2 code in > hadoop-yarn-server-timelineservice module. Because Resource Manager depends > on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM > module on HBase modules is introduced (HBase backend is pluggable, so we do > not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop > 3, we encountered a circular dependency during our builds between HBase2.0 > and Hadoop3 artifacts. > {code} > hadoop-mapreduce-client-common, hadoop-yarn-client, > hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, > hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, > hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common] > {code} > This jira proposes we move all HBase-backend-related code from > hadoop-yarn-server-timelineservice into its own module (possible name is > yarn-server-timelineservice-storage) so that core RM modules do not depend on > HBase modules any more. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
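The two report sections Sangjin describes can be triaged mechanically. A small sketch for splitting `mvn dependency:analyze` output into the two lists (the "Used undeclared" / "Unused declared" section headers match the plugin's usual WARNING output, but treat the exact format as an assumption and adjust the markers for your Maven version):

```python
# Split `mvn dependency:analyze` WARNING output into the two dependency lists
# discussed above. The section-header strings are assumptions about the
# plugin's output format, not a guaranteed contract.

def parse_analyze(output):
    sections = {"used_undeclared": [], "unused_declared": []}
    current = None
    for line in output.splitlines():
        text = line.replace("[WARNING]", "").strip()
        if text.startswith("Used undeclared dependencies"):
            current = "used_undeclared"
        elif text.startswith("Unused declared dependencies"):
            current = "unused_declared"
        elif current and text:
            sections[current].append(text)
    return sections

sample = """\
[WARNING] Used undeclared dependencies found:
[WARNING]    org.apache.hbase:hbase-client:jar:1.1.3:compile
[WARNING] Unused declared dependencies found:
[WARNING]    com.google.inject:guice:jar:4.0:compile
"""
print(parse_analyze(sample)["used_undeclared"])
# → ['org.apache.hbase:hbase-client:jar:1.1.3:compile']
```

The "used but undeclared" list is usually safe to act on directly; as the comment notes, the "unused declared" list needs a manual check, since runtime-only dependencies escape bytecode analysis.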
[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath
[ https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572236#comment-15572236 ] Steve Loughran commented on YARN-5271: -- ATS 1.5 is the filesystem outputter; ATS v1.0 is REST only. I don't know about anything else ... I have not been keeping current with this > ATS client doesn't work with Jersey 2 on the classpath > -- > > Key: YARN-5271 > URL: https://issues.apache.org/jira/browse/YARN-5271 > Project: Hadoop YARN > Issue Type: Bug > Components: client, timelineserver >Affects Versions: 2.7.2 >Reporter: Steve Loughran > > see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a > timeline client, *even if the server is an ATS1.5 server and publishing is > via the FS* -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5731) Preemption does not work in few corner cases when reservations are placed
[ https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-5731: -- Summary: Preemption does not work in few corner cases when reservations are placed (was: Preemption does not work in few corner cases where reservations are placed) > Preemption does not work in few corner cases when reservations are placed > - > > Key: YARN-5731 > URL: https://issues.apache.org/jira/browse/YARN-5731 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.7.3 >Reporter: Sunil G >Assignee: Sunil G > > Preemption doesn't kick in under the below scenario. > Two queues A and B, each with 50% capacity, 100% maximum capacity, and user > limit factor 2. A job submitted to queueA has taken 95% of the resources > in the cluster. The remaining 5% was also reserved by the same job, as demand was still > higher. > Now submit a small job whose AM container size is smaller than the above-mentioned > 5%. The job waits and no preemption happens. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5731) Preemption does not work in few corner cases where reservations are placed
Sunil G created YARN-5731: - Summary: Preemption does not work in few corner cases where reservations are placed Key: YARN-5731 URL: https://issues.apache.org/jira/browse/YARN-5731 Project: Hadoop YARN Issue Type: Bug Components: capacity scheduler Affects Versions: 2.7.3 Reporter: Sunil G Assignee: Sunil G Preemption doesn't kick in under the below scenario. Two queues A and B, each with 50% capacity, 100% maximum capacity, and user limit factor 2. A job submitted to queueA has taken 95% of the resources in the cluster. The remaining 5% was also reserved by the same job, as demand was still higher. Now submit a small job whose AM container size is smaller than the above-mentioned 5%. The job waits and no preemption happens. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
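The queue setup described in this report can be expressed with standard Capacity Scheduler properties. A sketch of the relevant capacity-scheduler.xml fragment (queue names A and B are taken from the description; this is a reconstruction for reproducing the scenario, not the reporter's actual config):

```xml
<configuration>
  <!-- Two queues A and B under root, as in the report. -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>A,B</value>
  </property>
  <!-- 50% capacity, 100% maximum capacity, user limit factor 2 for each. -->
  <property>
    <name>yarn.scheduler.capacity.root.A.capacity</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.A.maximum-capacity</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.A.user-limit-factor</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.B.capacity</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.B.maximum-capacity</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.B.user-limit-factor</name>
    <value>2</value>
  </property>
</configuration>
```

With this config, a large job in A can grow to 100% via the elastic maximum-capacity, reserving the last 5%; the small job then lands in B and waits for the reservation to be preempted.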
[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572149#comment-15572149 ] Hadoop QA commented on YARN-5145: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 43s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch 51 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 4m 16s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b17 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833137/YARN-5145-YARN-3368.03.patch | | JIRA Issue | YARN-5145 | | Optional Tests | asflicense | | uname | Linux 3b401ae43533 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-3368 / 60c8810 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/13378/artifact/patchprocess/whitespace-eol.txt | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/13378/artifact/patchprocess/whitespace-tabs.txt | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13378/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR > - > > Key: YARN-5145 > URL: https://issues.apache.org/jira/browse/YARN-5145 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Sunil G > Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, > YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, > YARN-5145-YARN-3368.03.patch, newUIInOldRMWebServer.png > > > Existing YARN UI configuration is under Hadoop package's directory: > $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to > $HADOOP_CONF_DIR like other configurations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15572127#comment-15572127 ] Sunil G edited comment on YARN-5145 at 10/13/16 2:45 PM: - Thanks [~leftnoteasy] for sharing the test results. I think the issue was with CORS on the client side; the client also has to query via the CORS proxy address. I have made the necessary changes and tested on a 5-node cluster where the timeline v2 server is running. The RM has a config to avoid the CORS issue, but timeline v2 does not, so we must use corsproxy for timeline v2 until the timeline service supports CORS. Hence I have made CORS mandatory in this patch. Please help to check the same. cc/[~leftnoteasy] and [~Sreenath] was (Author: sunilg): Thanks [~leftnoteasy] for sharing the test results.. I think there was with CORS from client side. Client also has to query with cors address. I have made necessary changes and tested in 5node cluster where timeline v2 server is running. RM has some config to avoid cors issue, however timeline v2 has cors issue hence we must use corsproxy for timeline v2 for a brief time till timeline supports cors. Hence I have made cors as mandatory in this patch. Pls help to check the same.. cc/[~leftnoteasy] and [~Sreenath] > [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR > - > > Key: YARN-5145 > URL: https://issues.apache.org/jira/browse/YARN-5145 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Sunil G > Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, > YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, > YARN-5145-YARN-3368.03.patch, newUIInOldRMWebServer.png > > > Existing YARN UI configuration is under Hadoop package's directory: > $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to > $HADOOP_CONF_DIR like other configurations. 
[jira] [Updated] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-5145: -- Attachment: YARN-5145-YARN-3368.03.patch Thanks [~leftnoteasy] for sharing the test results. I think the issue was with CORS on the client side; the client also has to query via the CORS proxy address. I have made the necessary changes and tested on a 5-node cluster where the timeline v2 server is running. The RM has a config to avoid the CORS issue, but timeline v2 does not, so we must use corsproxy for timeline v2 until the timeline service supports CORS. Hence I have made CORS mandatory in this patch. Please help to check the same. cc/[~leftnoteasy] and [~Sreenath] > [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR > - > > Key: YARN-5145 > URL: https://issues.apache.org/jira/browse/YARN-5145 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Kai Sasaki > Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, > YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, > YARN-5145-YARN-3368.03.patch, newUIInOldRMWebServer.png > > > Existing YARN UI configuration is under Hadoop package's directory: > $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to > $HADOOP_CONF_DIR like other configurations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
[ https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G reassigned YARN-5145: - Assignee: Sunil G (was: Kai Sasaki) > [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR > - > > Key: YARN-5145 > URL: https://issues.apache.org/jira/browse/YARN-5145 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Sunil G > Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, > YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, > YARN-5145-YARN-3368.03.patch, newUIInOldRMWebServer.png > > > Existing YARN UI configuration is under Hadoop package's directory: > $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to > $HADOOP_CONF_DIR like other configurations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-5730) Improve setting of environment variable HOME in containers
Markus Döring created YARN-5730: --- Summary: Improve setting of environment variable HOME in containers Key: YARN-5730 URL: https://issues.apache.org/jira/browse/YARN-5730 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager Environment: ContainerLaunch.java Reporter: Markus Döring Priority: Minor Currently, the HOME environment variable for a YARN container is determined as follows[1]: # if the (undocumented) configuration {{yarn.nodemanager.user-home-dir}} is set, HOME is set to its value # otherwise, HOME is set to {{"/home/"}} Option 1 is suboptimal in a multi-user environment, while the default does not help at all. It would be nice if we could do one of the following: # leave HOME unset by default # default to {{"/home/" + container.getUser()}} # get HOME from the container Option 1 would at least inform the process about the problem, but would obviously cause some problems in programs that assume HOME to be set. Option 2 might point to the correct home; at worst it is no more incorrect than {{"/home/"}}. Option 3 might be the best choice, but it also requires API changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
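To make the trade-off in YARN-5730 concrete, here is a minimal sketch contrasting the current fallback with the proposed option 2. This is not the actual ContainerLaunch code; the class and method names are invented for illustration.

```java
// Illustrative sketch only (NOT ContainerLaunch.java): contrasts the current
// HOME fallback with the proposed "/home/" + user fallback from the issue.
public class ContainerHomeDir {

    /** Current behaviour: configured value wins, otherwise a bare "/home/". */
    public static String currentHome(String configuredUserHomeDir) {
        return configuredUserHomeDir != null ? configuredUserHomeDir : "/home/";
    }

    /** Proposed option 2: fall back to "/home/" + the container's user. */
    public static String proposedHome(String configuredUserHomeDir,
                                      String containerUser) {
        if (configuredUserHomeDir != null) {
            return configuredUserHomeDir; // still suboptimal when multi-user
        }
        return "/home/" + containerUser;
    }

    public static void main(String[] args) {
        System.out.println(currentHome(null));           // helps nobody
        System.out.println(proposedHome(null, "alice")); // at least plausible
    }
}
```

The sketch shows why option 2 is "not more incorrect" than the default: with no configuration set, the current code yields the same literal for every user, while option 2 at least varies per container user.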
[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle
[ https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571974#comment-15571974 ] Varun Vasudev commented on YARN-4597: - Thanks for the patch [~asuresh]. 1) I agree with [~kasha] on {quote}The methods for killing containers as needed all seem to be hardcoded to only consider allocated resources. Can we abstract it out further to allow for passing either allocation or utilization based on whether oversubscription is enabled.{quote} At some point, people will want to be able to plug in policies to decide which containers to kill. However, I wouldn't hold up the patch for it. 2) The changes to BaseContainerManagerTest.java seem unnecessary. 3) Can you explain why we need the synchronized block here - {code} + synchronized (this.containersAllocation) {code} > Add SCHEDULE to NM container lifecycle > -- > > Key: YARN-4597 > URL: https://issues.apache.org/jira/browse/YARN-4597 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Chris Douglas >Assignee: Arun Suresh > Attachments: YARN-4597.001.patch, YARN-4597.002.patch > > > Currently, the NM immediately launches containers after resource > localization. Several features could be more cleanly implemented if the NM > included a separate stage for reserving resources. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
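Regarding question 3 above, the usual reason a block like {{synchronized (this.containersAllocation)}} appears is that several threads mutate a shared allocation structure while another thread needs a consistent aggregate view. A hedged sketch (invented names, not the YARN-4597 patch itself):

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch, not the actual patch: heartbeat/launch threads update a
// shared per-container allocation map while a monitor thread reads a
// consistent total. The synchronized blocks guard both the check-then-act
// update and the aggregate read.
public class AllocationTracker {
    private final Map<String, Long> containersAllocation = new HashMap<>();

    public void allocate(String containerId, long memMb) {
        synchronized (containersAllocation) { // atomic read-modify-write
            containersAllocation.merge(containerId, memMb, Long::sum);
        }
    }

    public long totalAllocatedMb() {
        synchronized (containersAllocation) { // consistent snapshot of the sum
            long total = 0;
            for (long v : containersAllocation.values()) {
                total += v;
            }
            return total;
        }
    }
}
```

Without the lock, a reader iterating the map while a writer rehashes it can throw ConcurrentModificationException or observe a partially applied update, which is presumably what the synchronized block in the patch defends against.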
[jira] [Commented] (YARN-5708) Implement APIs to get resource profiles from the RM
[ https://issues.apache.org/jira/browse/YARN-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571869#comment-15571869 ] Hadoop QA commented on YARN-5708: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 37s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 6s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 59s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 27s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 35s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s {color} | {color:green} YARN-3926 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 2m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 13s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 2 new + 156 unchanged - 0 fixed = 158 total (was 156) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 2 new + 155 unchanged - 0 fixed = 157 total (was 155) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s {color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 6s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 3s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 18s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 118m 25s {color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 232m 57s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests |
[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571847#comment-15571847 ] Hadoop QA commented on YARN-5699: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 42s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 26s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 28s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 102 unchanged - 11 fixed = 103 total (was 113) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s {color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 58s {color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 29s {color} | {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 59s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 84m 40s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833093/0002-YARN-5699.patch | | JIRA Issue | YARN-5699 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5a112343ca6d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 901eca0 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 |
[jira] [Updated] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.
[ https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5699: Attachment: 0002-YARN-5699.patch Updating the patch with the following changes. The patch is rebased against trunk as it needs to be committed to trunk. # Added the container finish time so it is published on the container finish event. The container creation time is not added; users can take it from the entity created time. # The appUpdated event publish is kept unmodified as per the review comment. # Removed the duplicated information published during attempt start and attempt end as per Varun's review comment. # The tracking URL and original tracking URL are published at the entity info level. The original tracking URL's lifetime is the same as the application's run time; while the application is running, the client needs to construct the URL with or without the proxy, as is done for rendering AppsBlock. # Finally, all the constants are renamed to *_INFO rather than *_EVENT_INFO or *_ENTITY_INFO. > Retrospect yarn entity fields which are publishing in events info fields. > - > > Key: YARN-5699 > URL: https://issues.apache.org/jira/browse/YARN-5699 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, > 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch > > > Currently, all the container information is published in 2 places: some of it > at the entity info (top-level) and some at the event info. > For containers, some of the event info should be published at the container > info level, for example: container exit status, container state, createdTime, > and finish time. This is general container information required for a > container-report, so it is better to publish it at the top-level info field. 
[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath
[ https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571406#comment-15571406 ] Weiwei Yang commented on YARN-5271: --- Hi [~ste...@apache.org], I am trying to help on this one. It looks like this Jersey client is only used for ATS v2, and I already saw a {{timelineServiceV2}} flag in the TimelineClientImpl class. Can we init this instance only when that flag is true? Then, in theory, a timeline client would work with Jersey 2 on the classpath while talking to ATS v1 via the file system writer. > ATS client doesn't work with Jersey 2 on the classpath > -- > > Key: YARN-5271 > URL: https://issues.apache.org/jira/browse/YARN-5271 > Project: Hadoop YARN > Issue Type: Bug > Components: client, timelineserver >Affects Versions: 2.7.2 >Reporter: Steve Loughran > > see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a > timeline client, *even if the server is an ATS1.5 server and publishing is > via the FS* -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
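The idea proposed in the comment above can be sketched as a guarded lazy initialization. All names here are invented for illustration; the real TimelineClientImpl and its Jersey types would differ.

```java
// Hedged sketch of flag-guarded lazy init (invented names): the
// Jersey-backed connector is only constructed when the v2 flag is set,
// so a v1/file-system publisher never loads any Jersey class at all and
// a Jersey 2 jar on the classpath cannot break it.
public class TimelineConnectorHolder {
    private final boolean timelineServiceV2;
    private Object jerseyClient; // stand-in for the real Jersey Client type

    public TimelineConnectorHolder(boolean timelineServiceV2) {
        this.timelineServiceV2 = timelineServiceV2;
        // Deliberately nothing Jersey-related in the constructor.
    }

    /** Lazily create the client, and only on the v2 path. */
    public Object getJerseyClient() {
        if (!timelineServiceV2) {
            throw new IllegalStateException("v1 path should not need Jersey");
        }
        if (jerseyClient == null) {
            jerseyClient = new Object(); // real code: Client.create(...)
        }
        return jerseyClient;
    }
}
```

The key property is that the incompatible class is referenced only inside the guarded method, so classloading of Jersey 1 types is deferred until a v2 caller actually needs them.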
[jira] [Updated] (YARN-5729) Bug fixes identified during testing
[ https://issues.apache.org/jira/browse/YARN-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-5729: Attachment: YARN-5729-yarn-native-services.001.patch > Bug fixes identified during testing > --- > > Key: YARN-5729 > URL: https://issues.apache.org/jira/browse/YARN-5729 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: yarn-native-services > > Attachments: YARN-5729-yarn-native-services.001.patch > > > Use this to apply bug fixes identified during testing. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order
[ https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571362#comment-15571362 ] Hadoop QA commented on YARN-5715: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 11s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 28s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s {color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s {color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 15s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833075/YARN-5715-YARN-5355.03.patch | | JIRA Issue | YARN-5715 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fcfe85d0c387 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-5355 / 5d7ad39 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13375/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13375/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice U: hadoop-yarn-project/hadoop-yarn | | Console output |
[jira] [Updated] (YARN-5708) Implement APIs to get resource profiles from the RM
[ https://issues.apache.org/jira/browse/YARN-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated YARN-5708: Attachment: YARN-5708-YARN-3926.004.patch Uploaded a new patch to fix the checkstyles and java doc warnings. > Implement APIs to get resource profiles from the RM > --- > > Key: YARN-5708 > URL: https://issues.apache.org/jira/browse/YARN-5708 > Project: Hadoop YARN > Issue Type: Sub-task > Components: client >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-5708-YARN-3926.001.patch, > YARN-5708-YARN-3926.002.patch, YARN-5708-YARN-3926.003.patch, > YARN-5708-YARN-3926.004.patch > > > Implement a set of APIs to get the available resource profiles from the RM. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5708) Implement APIs to get resource profiles from the RM
[ https://issues.apache.org/jira/browse/YARN-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571327#comment-15571327 ] Hadoop QA commented on YARN-5708: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 4m 26s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 54s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 28s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 29s {color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s {color} | {color:green} YARN-3926 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 2m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 54s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 29s {color} | {color:red} root: The patch generated 1 new + 313 unchanged - 0 fixed = 314 total (was 313) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 18s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 4 new + 156 unchanged - 0 fixed = 160 total (was 156) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 4 new + 155 unchanged - 0 fixed = 159 total (was 155) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s {color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s {color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 3s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 21s {color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 32s {color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 115m 7s {color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 232m 26s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests |
[jira] [Updated] (YARN-5715) introduce entity prefix for return and sort order
[ https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-5715: Attachment: YARN-5715-YARN-5355.03.patch Updated patch with the following delta changes from the previous one:
# Changed the boxed *Long* to the primitive *long* in the TimelineEntity object, and changed all subsequent getters of idPrefix to return the primitive long.
# Added support for storing the idPrefix in FileSystemTimelineWriterImpl. The idPrefix is used as a directory in the path where entities are stored, i.e. cluster_id/user_id/flow_name/flow_version/12345678/app_id/world/*0*/hello.thist.
One remaining problem with the FileSystemStorage support is sorting: we cannot use bytes as a directory name. I think we need to ignore the entityPrefix for file system storage and carry on with the default sorting order. Thoughts? > introduce entity prefix for return and sort order > - > > Key: YARN-5715 > URL: https://issues.apache.org/jira/browse/YARN-5715 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Sangjin Lee >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: YARN-5715-YARN-5355.01.patch, > YARN-5715-YARN-5355.02.patch, YARN-5715-YARN-5355.03.patch > > > While looking into YARN-5585, we have come across the need to provide a sort > order different than the current entity id order. The current entity id order > returns entities strictly in the lexicographical order, and as such it > returns the earliest entities first. This may not be the most natural return > order. A more natural return/sort order would be from the most recent > entities. > To solve this, we would like to add what we call the "entity prefix" in the > row key for the entity table. It is a number (long) that can be easily > provided by the client on write. In the row key, it would be added before the > entity id itself. > The entity prefix would be considered mandatory. 
On all writes (including > updates) the correct entity prefix should be set by the client so that the > correct row key is used. The entity prefix needs to be unique only within the > scope of the application and the entity type. > For queries that return a list of entities, the prefix values will be > returned along with the entity ids. Queries that specify the prefix and the > id should be returned quickly using the row key. If the query omits the > prefix but specifies the id (query by id), the query may be less efficient. > This JIRA should add the entity prefix to the entity API and add its handling > to the schema and the write path. The read path will be addressed in > YARN-5585. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
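The row-key layout discussed above (a client-supplied long prefix placed before the entity id, so that entities sort most-recent-first) can be sketched as follows. This is a minimal illustration, not YARN's actual schema code: the class and method names, the prefix inversion, the 0x00 separator, and the assumption of non-negative prefixes are all inventions for the example. Inverting the long makes larger prefix values (e.g. creation timestamps) compare smaller byte-wise, which yields the descending order the description asks for.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class EntityRowKeySketch {
  // Invert a non-negative long so that byte-wise lexicographic order of the
  // result is descending numeric order of the input.
  static byte[] invertedLongBytes(long idPrefix) {
    return ByteBuffer.allocate(Long.BYTES)
        .putLong(Long.MAX_VALUE - idPrefix)
        .array();
  }

  // Row key: inverted prefix bytes, a 0x00 separator, then the entity id.
  // The prefix comes before the id, so scans within one (app, entity type)
  // scope return entities with larger prefixes first.
  static byte[] rowKey(long idPrefix, String entityId) {
    byte[] prefix = invertedLongBytes(idPrefix);
    byte[] id = entityId.getBytes(StandardCharsets.UTF_8);
    byte[] key = new byte[prefix.length + 1 + id.length];
    System.arraycopy(prefix, 0, key, 0, prefix.length);
    key[prefix.length] = 0x00;
    System.arraycopy(id, 0, key, prefix.length + 1, id.length);
    return key;
  }
}
```

With this layout, a query that supplies both the prefix and the id can reconstruct the exact row key; a query-by-id alone cannot, which is why the description notes it may be less efficient.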
[jira] [Assigned] (YARN-5729) Bug fixes identified during testing
[ https://issues.apache.org/jira/browse/YARN-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha reassigned YARN-5729: --- Assignee: Gour Saha > Bug fixes identified during testing > --- > > Key: YARN-5729 > URL: https://issues.apache.org/jira/browse/YARN-5729 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: yarn-native-services > > > Use this to apply bug fixes identified during testing.
[jira] [Created] (YARN-5729) Bug fixes identified during testing
Gour Saha created YARN-5729: --- Summary: Bug fixes identified during testing Key: YARN-5729 URL: https://issues.apache.org/jira/browse/YARN-5729 Project: Hadoop YARN Issue Type: Sub-task Reporter: Gour Saha Fix For: yarn-native-services Use this to apply bug fixes identified during testing.
[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor
[ https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571140#comment-15571140 ] Sidharta Seethana commented on YARN-5388: - [~templedf] branch-2 patch: should we update {{DockerContainerExecutor.md.vm}} here as well? > MAPREDUCE-6719 requires changes to DockerContainerExecutor > -- > > Key: YARN-5388 > URL: https://issues.apache.org/jira/browse/YARN-5388 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Fix For: 2.9.0 > > Attachments: YARN-5388.001.patch, YARN-5388.002.patch, > YARN-5388.branch-2.001.patch, YARN-5388.branch-2.002.patch > > > Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} > method, it must also have the wildcard processing logic from > YARN-4958/YARN-5373 added to it. Without it, the use of -libjars will fail > unless wildcarding is disabled.
[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor
[ https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571092#comment-15571092 ] Sidharta Seethana commented on YARN-5388: - [~templedf] Apologies for the late response. Took a look at the trunk patch - I think {{DockerContainerExecutor.md.vm}} needs to be removed as well. I also see a reference to {{DockerContainerExecutor}} in {{hadoop-project/src/site/site.xml}}. > MAPREDUCE-6719 requires changes to DockerContainerExecutor > -- > > Key: YARN-5388 > URL: https://issues.apache.org/jira/browse/YARN-5388 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Fix For: 2.9.0 > > Attachments: YARN-5388.001.patch, YARN-5388.002.patch, > YARN-5388.branch-2.001.patch, YARN-5388.branch-2.002.patch > > > Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} > method, it must also have the wildcard processing logic from > YARN-4958/YARN-5373 added to it. Without it, the use of -libjars will fail > unless wildcarding is disabled.
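For context, the wildcard processing the issue description refers to (YARN-4958/YARN-5373) is about classpath entries ending in "/*", which -libjars relies on. A rough, self-contained sketch of that kind of expansion is below; the class and method names are invented for illustration and this is not YARN's actual {{writeLaunchEnv()}} logic, only the general idea of resolving a wildcard entry to the jar files in its directory.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class WildcardExpansionSketch {
  // Expand classpath entries: an entry ending in "/*" is replaced by the
  // jar files found in that directory; all other entries pass through
  // unchanged. A missing or empty directory contributes nothing.
  static List<String> expandClasspath(List<String> entries) {
    List<String> out = new ArrayList<>();
    for (String entry : entries) {
      if (entry.endsWith(File.separator + "*")) {
        File dir = new File(entry.substring(0, entry.length() - 2));
        File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
        if (jars != null) {
          for (File jar : jars) {
            out.add(jar.getPath());
          }
        }
      } else {
        out.add(entry);
      }
    }
    return out;
  }
}
```

An executor that rewrites the launch environment without applying this kind of expansion would pass the literal "dir/*" string through to a context where the JVM never expands it, which is consistent with the description's note that -libjars fails unless wildcarding is disabled.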