[jira] [Created] (YARN-6622) Document Docker work as experimental
Varun Vasudev created YARN-6622:
---

Summary: Document Docker work as experimental
Key: YARN-6622
URL: https://issues.apache.org/jira/browse/YARN-6622
Project: Hadoop YARN
Issue Type: Task
Reporter: Varun Vasudev
Assignee: Varun Vasudev

We should update the Docker support documentation calling out the Docker work as experimental.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6614) Deprecate DistributedSchedulingProtocol and add required fields directly to ApplicationMasterProtocol
[ https://issues.apache.org/jira/browse/YARN-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015245#comment-16015245 ] Hadoop QA commented on YARN-6614: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 2s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 20 new + 117 unchanged - 28 fixed = 137 total (was 145) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 227 unchanged - 4 fixed = 227 total (was 231) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 31s{color} | {color:red} h
[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015233#comment-16015233 ]

Sunil G commented on YARN-2113:
---

I tested in DRF and it works for me. I will wait for Eric's confirmation as well. If there are some minor nits, I'll update another patch after Eric's confirmation.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: scheduler
> Reporter: Vinod Kumar Vavilapalli
> Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt, YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
> Preemption today only works across queues and moves around resources across queues per demand and usage. We should also have user-level preemption within a queue, to balance capacity across users in a predictable manner.
[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015229#comment-16015229 ]

Wangda Tan commented on YARN-2113:
---

Discussed with [~sunilg] offline. I think the approach in [~sunilg]'s patch is correct, and it looks like the most straightforward approach. I'm OK with continuing this approach and getting the patch committed, since it is related to the intra-queue preemption feature. [~eepayne], could you help to check whether the latest patch works?
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015209#comment-16015209 ]

Wangda Tan commented on YARN-6593:
---

Thanks [~kkaranasos],

bq. Self-reference works fine even with the current protobuf version. I think I saw a single use of it in our yarn_protos (I think it was in the QueueInfo or something related).

Sounds good, thanks for the confirmation.

bq. (1) the proto becomes too complicated (and I am afraid we will be the only ones understanding it

This statement is true. :)

bq. (2) the client will still have access to very unrelated getters and setters.

This part is avoidable: we can avoid adding unrelated getters and setters to the Java API class, and access PB methods only in the -PBImpl class. For example:

{code}
class TargetConstraint {
  // Only expose what is required
  get/setTargetExpression();
  get/setScope();
}

class TargetConstraintPBImpl extends TargetConstraint {
  void setTargetExpression(...) {
    // Access fields of the Proto in the -PBImpl only
    SimplePlacementConstraintProtoBuilder.setTargetExpression(..);
  }
}
{code}

Please let me know what you think.

> [API] Introduce Placement Constraint object
> ---
>
> Key: YARN-6593
> URL: https://issues.apache.org/jira/browse/YARN-6593
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Konstantinos Karanasos
> Assignee: Konstantinos Karanasos
> Attachments: YARN-6593.001.patch
>
> This JIRA introduces an object for defining placement constraints.
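The API-class/-PBImpl split sketched in the comment above can be illustrated with a small self-contained example. All names here are hypothetical stand-ins: `FakeProtoBuilder` takes the place of a generated protobuf builder (e.g. something like a `SimplePlacementConstraintProto.Builder`), and the accessor names are illustrative, not the actual YARN-6593 API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the pattern: the public API class exposes only the
// accessors that matter; all protobuf access is confined to the -PBImpl.
public class TargetConstraintSketch {

    // Public API class: no proto types leak through this surface.
    public abstract static class TargetConstraint {
        public abstract String getTargetExpression();
        public abstract void setTargetExpression(String expr);
        public abstract String getScope();
        public abstract void setScope(String scope);
    }

    // Stand-in for a generated protobuf builder; a real PBImpl would hold
    // the generated proto builder here instead of a map.
    static class FakeProtoBuilder {
        private final Map<String, String> fields = new HashMap<>();
        void setField(String name, String value) { fields.put(name, value); }
        String getField(String name) { return fields.get(name); }
    }

    // -PBImpl: the only class that touches the proto builder.
    public static class TargetConstraintPBImpl extends TargetConstraint {
        private final FakeProtoBuilder builder = new FakeProtoBuilder();

        @Override public String getTargetExpression() { return builder.getField("targetExpression"); }
        @Override public void setTargetExpression(String expr) { builder.setField("targetExpression", expr); }
        @Override public String getScope() { return builder.getField("scope"); }
        @Override public void setScope(String scope) { builder.setField("scope", scope); }
    }

    public static void main(String[] args) {
        // Callers program against the API class, never the proto.
        TargetConstraint tc = new TargetConstraintPBImpl();
        tc.setScope("node");
        tc.setTargetExpression("hbase-master");
        System.out.println(tc.getScope() + " " + tc.getTargetExpression());
    }
}
```

The point of the design is that clients compile only against `TargetConstraint`, so "very unrelated getters and setters" on the underlying proto never appear in the public surface.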
[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015204#comment-16015204 ]

Ruslan Dautkhanov commented on YARN-6615:
---

Thanks a lot [~wilfreds] for such a prompt response.

> AmIpFilter drops query parameters on redirect
> ---
>
> Key: YARN-6615
> URL: https://issues.apache.org/jira/browse/YARN-6615
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha2
> Reporter: Wilfred Spiegelenburg
> Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6615.1.patch, YARN-6615-branch-2.6.1.patch, YARN-6615-branch-2.8.1.patch
>
> When an AM web request is redirected to the RM, the query parameters are dropped from the web request.
> This happens for Spark as described in SPARK-20772.
> The repro steps are:
> - Start up the spark-shell in yarn mode and run a job
> - Try to access the job details through http://:4040/jobs/job?id=0
> - An HTTP ERROR 400 is thrown (requirement failed: missing id parameter)
> This works fine in local or standalone mode, but does not work on YARN, where the query parameter is dropped. The request succeeds if the UI filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from the config, which shows that the problem is in the filter.
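The nature of the bug discussed in this thread can be sketched with a few lines of plain Java: when a filter rebuilds the redirect target from the request URI alone, anything after the `?` is lost, so the fix is to append the original query string when present. This is an illustrative stand-alone method, not the actual AmIpFilter code or patch.

```java
// Hypothetical sketch: rebuilding a redirect URL so that the original
// query string (e.g. "id=0") survives the redirect through the proxy.
public class RedirectUrlSketch {

    // proxyBase: the RM proxy prefix; requestUri/queryString: as they would
    // come from HttpServletRequest.getRequestURI()/getQueryString().
    static String buildRedirectUrl(String proxyBase, String requestUri, String queryString) {
        StringBuilder target = new StringBuilder(proxyBase).append(requestUri);
        if (queryString != null && !queryString.isEmpty()) {
            // Without this append the parameters are silently dropped,
            // which is exactly the failure described in the JIRA.
            target.append('?').append(queryString);
        }
        return target.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildRedirectUrl(
            "http://rm:8088/proxy/application_1_0001", "/jobs/job", "id=0"));
        // prints http://rm:8088/proxy/application_1_0001/jobs/job?id=0
    }
}
```

In a servlet filter the query string comes from `HttpServletRequest.getQueryString()`, which returns null when no parameters were sent, hence the null check.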
[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015181#comment-16015181 ] Hadoop QA commented on YARN-6615: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 51s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 47s{color} | {color:red} root in branch-2.6.1 failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} branch-2.6.1 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} branch-2.6.1 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} branch-2.6.1 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} branch-2.6.1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} branch-2.6.1 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 24s{color} | {color:green} branch-2.6.1 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 9s{color} | {color:red} hadoop-yarn-server-web-proxy in branch-2.6.1 failed with JDK v1.8.0_131. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} branch-2.6.1 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 1088 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 25s{color} | {color:red} The patch 73 line(s) with tabs. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 40s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 9s{color} | {color:red} hadoop-yarn-server-web-proxy in the patch failed with JDK v1.8.0_131. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s{color} | {color:red} hadoop-yarn-server-web-proxy in the patch failed with JDK v1.7.0_131. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 30s{color} | {color:red} The patch generated 98 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy | | | HTTP parameter directly written to HTTP header output in org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(ServletRequest, ServletResponse, FilterChain) At AmIpFilter.java:HTTP header output in org.apache.had
[jira] [Commented] (YARN-6577) Remove unused ContainerLocalization classes
[ https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015180#comment-16015180 ]

Hudson commented on YARN-6577:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11748 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11748/])
YARN-6577. Remove unused ContainerLocalization classes. Contributed by (cdouglas: rev b23fcc86c670aa896151a2bd8878154d7bb45d13)
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerLocalizationImpl.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerLocalization.java

> Remove unused ContainerLocalization classes
> ---
>
> Key: YARN-6577
> URL: https://issues.apache.org/jira/browse/YARN-6577
> Project: Hadoop YARN
> Issue Type: Bug
> Components: nodemanager
> Affects Versions: 2.7.3, 3.0.0-alpha2
> Reporter: ZhangBing Lin
> Assignee: ZhangBing Lin
> Priority: Minor
> Fix For: 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6577.001.patch
>
> From 2.7.3 and 3.0.0-alpha2, the ContainerLocalization interface and the ContainerLocalizationImpl implementation class are of no use, and I recommend removing the useless interface and implementation classes
[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015173#comment-16015173 ] Hadoop QA commented on YARN-6615: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 48s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 48s{color} | {color:red} root in branch-2.6.1 failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} branch-2.6.1 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} branch-2.6.1 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} branch-2.6.1 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} branch-2.6.1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} branch-2.6.1 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 23s{color} | {color:green} branch-2.6.1 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 9s{color} | {color:red} hadoop-yarn-server-web-proxy in branch-2.6.1 failed with JDK v1.8.0_131. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} branch-2.6.1 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s{color} | {color:red} The patch has 769 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 22s{color} | {color:red} The patch 73 line(s) with tabs. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-yarn-server-web-proxy in the patch failed with JDK v1.8.0_131. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s{color} | {color:red} hadoop-yarn-server-web-proxy in the patch failed with JDK v1.7.0_131. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s{color} | {color:red} The patch generated 98 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy | | | HTTP parameter directly written to HTTP header output in org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(ServletRequest, ServletResponse, FilterChain) At AmIpFilter.java:HTTP header output in org.apache.hado
[jira] [Commented] (YARN-6378) Negative usedResources memory in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015172#comment-16015172 ]

Ravi Prakash commented on YARN-6378:
---

Hi powerinf! It is possible. Do you have the CapacityScheduler? If you have the FairScheduler, YARN-3933 may be relevant.

> Negative usedResources memory in CapacityScheduler
> ---
>
> Key: YARN-6378
> URL: https://issues.apache.org/jira/browse/YARN-6378
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler, resourcemanager
> Affects Versions: 2.7.2
> Reporter: Ravi Prakash
> Assignee: Ravi Prakash
>
> Courtesy Thomas Nystrand, we found that on two of our clusters configured with the CapacityScheduler, usedResources occasionally becomes negative. e.g.
> {code}
> 2017-03-15 11:10:09,449 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1487222361993_17177_01 container=Container: [ContainerId: container_1487222361993_17177_01_14, NodeId: :27249, NodeHttpAddress: :8042, Resource: , Priority: 2, Token: null, ] queue=: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.03409091, absoluteUsedCapacity=0.006818182, numApps=1, numContainers=3 clusterResource= type=RACK_LOCAL
> {code}
[jira] [Commented] (YARN-6577) Remove unused ContainerLocalization classes
[ https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015160#comment-16015160 ]

ZhangBing Lin commented on YARN-6577:
---

[~chris.douglas], thank you for your review and commit!
[jira] [Updated] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wilfred Spiegelenburg updated YARN-6615:
---

Attachment: YARN-6615-branch-2.6.1.patch

Patch for branch-2.6, slightly different from the branch-2.8 patch because there is no ProxyUtils yet. The trunk patch applies to branch-2 as well, which seems to cover all open branches.
[jira] [Updated] (YARN-6577) Remove unused ContainerLocalization classes
[ https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated YARN-6577:
---

Summary: Remove unused ContainerLocalization classes (was: Useless interface and implementation class)

> Remove unused ContainerLocalization classes
> ---
>
> Key: YARN-6577
> URL: https://issues.apache.org/jira/browse/YARN-6577
> Project: Hadoop YARN
> Issue Type: Bug
> Components: nodemanager
> Affects Versions: 2.7.3, 3.0.0-alpha2
> Reporter: ZhangBing Lin
> Assignee: ZhangBing Lin
> Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-6577.001.patch
>
> From 2.7.3 and 3.0.0-alpha2, the ContainerLocalization interface and the ContainerLocalizationImpl implementation class are of no use, and I recommend removing the useless interface and implementation classes
[jira] [Updated] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wilfred Spiegelenburg updated YARN-6615:
---

Attachment: YARN-6615-branch-2.8.1.patch

As requested, a patch for branch-2.8. This patch also applies to branch-2.7.

> AmIpFilter drops query parameters on redirect
> ---
>
> Key: YARN-6615
> URL: https://issues.apache.org/jira/browse/YARN-6615
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha2
> Reporter: Wilfred Spiegelenburg
> Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6615.1.patch, YARN-6615-branch-2.8.1.patch
>
> When an AM web request is redirected to the RM the query parameters are dropped from the web request.
> This happens for Spark as described in SPARK-20772.
> The repro steps are:
> - Start up the spark-shell in yarn mode and run a job
> - Try to access the job details through http://:4040/jobs/job?id=0
> - A HTTP ERROR 400 is thrown (requirement failed: missing id parameter)
> This works fine in local or standalone mode, but does not work on Yarn where the query parameter is dropped. If the UI filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from the config which shows that the problem is in the filter
[jira] [Commented] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015084#comment-16015084 ] Hadoop QA commented on YARN-6602: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 22 unchanged - 0 fixed = 23 total (was 22) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 0 new + 4573 unchanged - 1 fixed = 4573 total (was 4574) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6602 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868641/YARN-6602.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cfc14441b07e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ef9e536 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15954/artifact/patchprocess/branch-findbugs-hadoop-yarn-projec
[jira] [Created] (YARN-6621) Validator for Placement Constraints
Konstantinos Karanasos created YARN-6621: Summary: Validator for Placement Constraints Key: YARN-6621 URL: https://issues.apache.org/jira/browse/YARN-6621 Project: Hadoop YARN Issue Type: Sub-task Reporter: Konstantinos Karanasos This library will be used to validate placement constraints. It can serve multiple validation purposes: 1) Check if the placement constraint has a valid form (e.g., a cardinality constraint should not have an associated target expression, a DELAYED_OR compound expression should only appear in specific places in a constraint tree, etc.) 2) Check if the constraints given by a user are conflicting (e.g., cardinality more than 5 in a host and less than 3 in a rack). 3) Check that the constraints are properly added in the Placement Constraint Manager. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
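The form checks described above amount to a small recursive walk over a constraint tree. The {{Constraint}} shape below is purely hypothetical (it is not the YARN API); it only sketches check 1, the structural validation, under the stated assumption that a cardinality constraint must not carry a target expression and a compound constraint needs at least two children:

```java
import java.util.List;

// Hypothetical sketch of the static form checks such a validator could run.
// The Constraint shape here is illustrative only; it is not the YARN API.
public class ConstraintValidator {

    enum Kind { CARDINALITY, TARGET, COMPOUND }

    static class Constraint {
        Kind kind;
        String targetExpression;   // expected only on TARGET constraints
        List<Constraint> children; // expected only on COMPOUND constraints

        Constraint(Kind kind, String targetExpression, List<Constraint> children) {
            this.kind = kind;
            this.targetExpression = targetExpression;
            this.children = children;
        }
    }

    // Check 1 from the description: the constraint tree must have a valid form.
    static boolean isValid(Constraint c) {
        switch (c.kind) {
            case CARDINALITY:
                // a cardinality constraint should not carry a target expression
                return c.targetExpression == null;
            case COMPOUND:
                // a compound expression needs at least two child expressions,
                // and each child must itself be well formed
                if (c.children == null || c.children.size() < 2) {
                    return false;
                }
                return c.children.stream().allMatch(ConstraintValidator::isValid);
            case TARGET:
            default:
                return c.targetExpression != null;
        }
    }
}
```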
[jira] [Commented] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015038#comment-16015038 ] Haibo Chen commented on YARN-5705: -- [~akhilpb] The latest patch no longer applies. Any plan to get this in for ATS v2? It will be very appealing for users that try out ATSv2. > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: YARN-5705.001.patch, YARN-5705.002.patch, > YARN-5705.003.patch, YARN-5705.004.patch, YARN-5705.005.patch, > YARN-5705.006.patch, YARN-5705.007.patch, YARN-5705.008.patch, > YARN-5705.009.patch, YARN-5705.010.patch, YARN-5705.011.patch, > YARN-5705.012.patch, YARN-5705.013.patch, YARN-5705.014.patch, > YARN-5705-YARN-3368.001.patch, YARN-5705-YARN-3368.002.patch, > YARN-5705-YARN-3368.003.patch, YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object
[ https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015037#comment-16015037 ] Konstantinos Karanasos commented on YARN-6594: -- Cool, thanks [~leftnoteasy]. Will fix the @Stable. Also need to add the tests in the PBImpl that you pointed out in YARN-6593 too. > [API] Introduce SchedulingRequest object > > > Key: YARN-6594 > URL: https://issues.apache.org/jira/browse/YARN-6594 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6594.001.patch > > > This JIRA introduces a new SchedulingRequest object. > It will be part of the {{AllocateRequest}} and will be used to define sizing > (e.g., number of allocations, size of allocations) and placement constraints > for allocations. > Applications can use either this new object (when rich placement constraints > are required) or the existing {{ResourceRequest}} object. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
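As a rough illustration of the object shape the JIRA describes — sizing (number and size of allocations) coupled with a placement constraint in one request — consider the following self-contained sketch. All names here are hypothetical; the actual YARN-6594 API may differ:

```java
// Rough sketch of the shape described in YARN-6594: a scheduling request that
// couples sizing with an optional placement constraint.
// All names are hypothetical; the actual API may differ.
public class SchedulingRequestSketch {

    final int numAllocations;         // how many allocations are requested
    final long memoryMb;              // memory size of each allocation
    final int vcores;                 // vcores of each allocation
    final String placementConstraint; // stand-in for a PlacementConstraint object

    SchedulingRequestSketch(int numAllocations, long memoryMb, int vcores,
                            String placementConstraint) {
        if (numAllocations <= 0 || memoryMb <= 0 || vcores <= 0) {
            throw new IllegalArgumentException("sizing fields must be positive");
        }
        this.numAllocations = numAllocations;
        this.memoryMb = memoryMb;
        this.vcores = vcores;
        // may be null: apps without rich constraints can keep using ResourceRequest
        this.placementConstraint = placementConstraint;
    }
}
```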
[jira] [Comment Edited] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015028#comment-16015028 ] Konstantinos Karanasos edited comment on YARN-6593 at 5/18/17 1:13 AM: --- Thanks for the comments guys. So, we all agree on the validator, will open a JIRA to track this. +1 on having a string representation and a library to convert it to constraints. I think it will become very useful soon, but agreed it is not urgent. Self-reference works fine even with the current protobuf version. I think I saw a single use of it in our yarn_protos (I think it was in the QueueInfo or something related). That said, both approaches make sense to me. I guess the approach in the current patch is a little bit more cumbersome to write (we will have one extra call to create a PlacementConstraint). Is that its disadvantage, or is there something else too? On the other hand, what I don't like if we coalesce the two is that (1) the proto becomes too complicated (and I am afraid we will be the only ones understanding it :)), and (2) the client will still have access to very unrelated getters and setters. For example, the user could be creating a CompoundConstraint and have access to a getTargetExpression() function. was (Author: kkaranasos): Thanks for the comments guys. So, we all agree on the validator, will open a JIRA to track this. +1 on having a string representation and a library to convert it to constraints. I think it will become very useful soon, but agreed it is not urgent. Self-reference works fine even with the current protobuf version. I think I saw a single use of it in our yarn_protos (I think it was in the QueueInfo or something related). That said, both approaches make sense to me. I guess the approach in the current JIRA is a little bit more cumbersome to write (we will have one extra call to create a PlacementConstraint). Is that its disadvantage, or is there something else too?
On the other hand, what I don't like if we coalesce the two is that (1) the proto becomes too complicated (and I am afraid we will be the only ones understanding it :)), and (2) the client will still have access to very unrelated getters and setters. For example, the user could be creating a CompoundConstraint and have access to a getTargetExpression() function. > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints.
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015028#comment-16015028 ] Konstantinos Karanasos commented on YARN-6593: -- Thanks for the comments guys. So, we all agree on the validator, will open a JIRA to track this. +1 on having a string representation and a library to convert it to constraints. I think it will become very useful soon, but agreed it is not urgent. Self-reference works fine even with the current protobuf version. I think I saw a single use of it in our yarn_protos (I think it was in the QueueInfo or something related). That said, both approaches make sense to me. I guess the approach in the current JIRA is a little bit more cumbersome to write (we will have one extra call to create a PlacementConstraint). Is that its disadvantage, or is there something else too? On the other hand, what I don't like if we coalesce the two is that (1) the proto becomes too complicated (and I am afraid we will be the only ones understanding it :)), and (2) the client will still have access to very unrelated getters and setters. For example, the user could be creating a CompoundConstraint and have access to a getTargetExpression() function. > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints.
[jira] [Commented] (YARN-6594) [API] Introduce SchedulingRequest object
[ https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015021#comment-16015021 ] Wangda Tan commented on YARN-6594: -- Thanks [~kkaranasos], from a rough look, generally looks good to me. A few comments: 1) There's a {{@Stable}} method, is it better to change it to unstable since all the others are unstable? 2) ExecutionType is added to SchedulingRequest, I am just not sure if it is the best place to add it. Let's keep it now and revisit it in the future. > [API] Introduce SchedulingRequest object > > > Key: YARN-6594 > URL: https://issues.apache.org/jira/browse/YARN-6594 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6594.001.patch > > > This JIRA introduces a new SchedulingRequest object. > It will be part of the {{AllocateRequest}} and will be used to define sizing > (e.g., number of allocations, size of allocations) and placement constraints > for allocations. > Applications can use either this new object (when rich placement constraints > are required) or the existing {{ResourceRequest}} object.
[jira] [Updated] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-6602: Attachment: YARN-6602.002.patch The 002 patch - Fixes the (relevant) checkstyle warnings, including making {{RMProxy.user}} private. > Impersonation does not work if standby RM is contacted first > > > Key: YARN-6602 > URL: https://issues.apache.org/jira/browse/YARN-6602 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0-alpha3 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-6602.001.patch, YARN-6602.002.patch > > > When RM HA is enabled, impersonation does not work correctly if the Yarn > Client connects to the standby RM first. When this happens, the > impersonation is "lost" and the client does things on behalf of the > impersonator user. We saw this with the OOZIE-1770 Oozie on Yarn feature. > I need to investigate this some more, but it appears to be related to > delegation tokens. When this issue occurs, the tokens have the owner as > "oozie" instead of the actual user. On a hunch, we found a workaround: > explicitly adding a correct RM HA delegation token fixes the problem: > {code:java} > org.apache.hadoop.yarn.api.records.Token token = > yarnClient.getRMDelegationToken(ClientRMProxy.getRMDelegationTokenService(conf)); > org.apache.hadoop.security.token.Token token2 = new > org.apache.hadoop.security.token.Token(token.getIdentifier().array(), > token.getPassword().array(), new Text(token.getKind()), new > Text(token.getService())); > UserGroupInformation.getCurrentUser().addToken(token2); > {code}
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015013#comment-16015013 ] Wangda Tan commented on YARN-6593: -- [~asuresh], All make sense to me, bq. But I think the problem with that is I think we would require self-referential structs. Good point, let's check if it works. bq. I would also like to have library that might take a string representation of the Constraints +1, I think we can use an existing format, such as JSON / YAML. This can be done in a separate patch (with lower priority I think). > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015009#comment-16015009 ] Robert Kanter commented on YARN-6602: - Thanks [~kasha]. Checkstyle actually complained about that already :) > Impersonation does not work if standby RM is contacted first > > > Key: YARN-6602 > URL: https://issues.apache.org/jira/browse/YARN-6602 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0-alpha3 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-6602.001.patch > > > When RM HA is enabled, impersonation does not work correctly if the Yarn > Client connects to the standby RM first. When this happens, the > impersonation is "lost" and the client does things on behalf of the > impersonator user. We saw this with the OOZIE-1770 Oozie on Yarn feature. > I need to investigate this some more, but it appears to be related to > delegation tokens. When this issue occurs, the tokens have the owner as > "oozie" instead of the actual user. On a hunch, we found a workaround: > explicitly adding a correct RM HA delegation token fixes the problem: > {code:java} > org.apache.hadoop.yarn.api.records.Token token = > yarnClient.getRMDelegationToken(ClientRMProxy.getRMDelegationTokenService(conf)); > org.apache.hadoop.security.token.Token token2 = new > org.apache.hadoop.security.token.Token(token.getIdentifier().array(), > token.getPassword().array(), new Text(token.getKind()), new > Text(token.getService())); > UserGroupInformation.getCurrentUser().addToken(token2); > {code}
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015008#comment-16015008 ] Arun Suresh commented on YARN-6593: --- Thanks for the work on this [~kkaranasos] bq. For example, PlacementConstraints: Inside the PlacementConstraintProto, we can add CompoundType / Children / DelayCriteria / Everything in the SimplePlacementConstraintProto. And a field to indicate if it is a CompoundConstraint / (Simple)TargetConstraint / (Simple)CardinalityConstraint. From a Client point of view, I tend to agree with [~leftnoteasy]. But I think the problem with that is I think we would require self-referential structs. Essentially, the {{PlacementConstraintProto}} of type _Compound_ should contain a repeated {{PlacementConstraintProto}} field. I am not sure this is supported in protobufs currently. W.r.t Validations, as discussed offline with [~kkaranasos], this is to enforce correctness of the expression, which is currently not enforceable via the protobuf language. For example: * We need to enforce that a compound expression should have at least 2 child placement expressions. W.r.t use of the Builder: as much as I like the Builder, it might be difficult to use it to build nested expressions. +1 for a Validation library, which can be reused both in the AMRMClient and the ClientRMService. I would also like to have a library that might take a string representation of the Constraints - as defined in the doc - and convert it to a proto / API object. This can be tackled later I guess. > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints.
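On the self-reference question: protobuf does allow recursive message definitions (a message may contain a repeated field of its own type), and elsewhere in this thread [~kkaranasos] notes that self-reference works with the current protobuf version. The Java shape such a recursive message generates looks roughly like the self-contained sketch below, where a compound constraint holds a repeated list of children of its own type; the names are illustrative, not the actual generated API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration that a self-referential structure is expressible: a compound
// constraint holds a repeated list of child constraints of the same type,
// which is the shape a recursive PlacementConstraintProto would generate.
// Names here are illustrative only.
public class CompoundConstraintSketch {

    final String type;  // e.g. "AND", "OR", "SIMPLE" (hypothetical tags)
    final List<CompoundConstraintSketch> children = new ArrayList<>();

    CompoundConstraintSketch(String type) {
        this.type = type;
    }

    CompoundConstraintSketch add(CompoundConstraintSketch child) {
        children.add(child);
        return this;
    }

    // Depth of the constraint tree: a leaf has depth 1.
    int depth() {
        int max = 0;
        for (CompoundConstraintSketch c : children) {
            max = Math.max(max, c.depth());
        }
        return 1 + max;
    }
}
```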
[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015002#comment-16015002 ] Wangda Tan commented on YARN-2113: -- Thanks [~sunilg] / [~eepayne] for chasing the issue. Since this is likely caused by a known issue (YARN-6538), which is related to DRF and only happens when priority-first is enabled, do you think it makes sense to only allow USERLIMIT_FIRST in the patch? (Don't change any internal logic, but do not allow configuring PRIORITY_FIRST when doing config validation.) I'm a little concerned that fixing the issue in a 100K+ patch is hard to review. If the rest of the patch looks fine, we can commit it, fix the issue in YARN-6538, and re-enable PRIORITY_FIRST. Thoughts? > Add cross-user preemption within CapacityScheduler's leaf-queue > --- > > Key: YARN-2113 > URL: https://issues.apache.org/jira/browse/YARN-2113 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Vinod Kumar Vavilapalli >Assignee: Sunil G > Attachments: IntraQueue Preemption-Impact Analysis.pdf, > TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt, > YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, > YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, > YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, > YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, > YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, > YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, > YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 Intra-QueuePreemption > Behavior.pdf, YARN-2113.v0.patch > > > Preemption today only works across queues and moves around resources across > queues per demand and usage. We should also have user-level preemption within > a queue, to balance capacity across users in a predictable manner.
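The proposal above amounts to a guard at configuration-validation time: leave the internal logic unchanged, but reject the priority-first ordering until the underlying issue is fixed. A minimal sketch of that idea, with hypothetical names (these are not the actual CapacityScheduler configuration keys or classes):

```java
// Sketch of the proposed guard: keep the internal logic unchanged, but reject
// the priority-first ordering during config validation until YARN-6538 lands.
// Enum and method names are hypothetical, not the real CapacityScheduler API.
public class IntraQueuePreemptionOrderingCheck {

    enum OrderingPolicy { USERLIMIT_FIRST, PRIORITY_FIRST }

    static OrderingPolicy validate(String configured) {
        OrderingPolicy policy = OrderingPolicy.valueOf(configured);
        if (policy == OrderingPolicy.PRIORITY_FIRST) {
            // disabled until the DRF-related issue (YARN-6538) is resolved
            throw new IllegalArgumentException(
                "intra-queue preemption ordering PRIORITY_FIRST is not yet supported");
        }
        return policy;
    }
}
```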
[jira] [Commented] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014997#comment-16014997 ] Karthik Kambatla commented on YARN-6602: The patch looks good to me. One minor comment: can RMProxy.user be private? [~jianhe] - Can you please take a look at this as well? Robert and I debated multiple approaches and agreed this is the simplest and safest change. > Impersonation does not work if standby RM is contacted first > > > Key: YARN-6602 > URL: https://issues.apache.org/jira/browse/YARN-6602 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0-alpha3 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-6602.001.patch > > > When RM HA is enabled, impersonation does not work correctly if the Yarn > Client connects to the standby RM first. When this happens, the > impersonation is "lost" and the client does things on behalf of the > impersonator user. We saw this with the OOZIE-1770 Oozie on Yarn feature. > I need to investigate this some more, but it appears to be related to > delegation tokens. When this issue occurs, the tokens have the owner as > "oozie" instead of the actual user. On a hunch, we found a workaround: > explicitly adding a correct RM HA delegation token fixes the problem: > {code:java} > org.apache.hadoop.yarn.api.records.Token token = > yarnClient.getRMDelegationToken(ClientRMProxy.getRMDelegationTokenService(conf)); > org.apache.hadoop.security.token.Token token2 = new > org.apache.hadoop.security.token.Token(token.getIdentifier().array(), > token.getPassword().array(), new Text(token.getKind()), new > Text(token.getService())); > UserGroupInformation.getCurrentUser().addToken(token2); > {code}
[jira] [Resolved] (YARN-4481) negative pending resource of queues leads to applications in accepted status infinitely
[ https://issues.apache.org/jira/browse/YARN-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan resolved YARN-4481. -- Resolution: Duplicate This should be fixed by YARN-4844. Closing as dup. > negative pending resource of queues leads to applications in accepted status > infinitely > - > > Key: YARN-4481 > URL: https://issues.apache.org/jira/browse/YARN-4481 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.7.2 >Reporter: gu-chi >Priority: Critical > Attachments: jmx.txt > > > Met a scenario of negative pending resource with the capacity scheduler; in jmx, > it shows: > {noformat} > "PendingMB" : -4096, > "PendingVCores" : -1, > "PendingContainers" : -1, > {noformat} > full jmx information attached. > This is not just a jmx UI issue; the actual pending resource of the queue is also > negative, as I see in the debug log: > bq. DEBUG | ResourceManager Event Processor | Skip this queue=root, because > it doesn't need more resource, schedulingMode=RESPECT_PARTITION_EXCLUSIVITY > node-partition= | ParentQueue.java > This leads to the {{NULL_ASSIGNMENT}}. > The background is submitting hundreds of applications that consume all cluster > resources, so that reservations happen. While running, network faults were injected by > some tool; injection types were delay, jitter, repeat, packet loss and disorder. Then most of the submitted > applications were killed. > Is anyone else facing negative pending resources, or does anyone have an idea of how this happens?
[jira] [Updated] (YARN-6614) Deprecate DistributedSchedulingProtocol and add required fields directly to ApplicationMasterProtocol
[ https://issues.apache.org/jira/browse/YARN-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6614: -- Attachment: YARN-6614.001.patch Attaching initial patch. [~curino] / [~subru] / [~giovanni.fumarola] do take a look. This would greatly simplify the Federation Interceptor (and all other AMRMProxy frameworks) as well: prior to this, ALL interceptors would have to decide if they support DistributedScheduling or not, even though they have nothing to do with it. This was because DistributedScheduling was implemented as a wrapper protocol. With this patch, I've essentially moved all the DistributedSchedulingProtocol fields inside the {{RegisterApplicationMasterResponse}}, {{AllocateRequest}} and {{AllocateResponse}} objects and removed the Protocol. > Deprecate DistributedSchedulingProtocol and add required fields directly to > ApplicationMasterProtocol > - > > Key: YARN-6614 > URL: https://issues.apache.org/jira/browse/YARN-6614 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-6614.001.patch > > > The {{DistributedSchedulingProtocol}} was initially designed as a wrapper > protocol over the {{ApplicationMasterProtocol}}. > This JIRA proposes to deprecate the protocol itself and move the extra fields > of the {{RegisterDistributedSchedulingAMResponse}} and > {{DistributedSchedulingAllocateResponse}} to the > {{RegisterApplicationMasterResponse}} and {{AllocateResponse}} respectively. > This will simplify the code quite a bit and make it easier to expose it as a > preprocessor.
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014940#comment-16014940 ] Wangda Tan commented on YARN-6593: -- Thanks [~kkaranasos] for your explanations. Yeah I think you're correct; for the current PB, we cannot easily do extension on the PB side. However, we can still improve the Java API. For example, PlacementConstraints: inside the PlacementConstraintProto, we can add CompoundType / Children / DelayCriteria / everything in the SimplePlacementConstraintProto, and a field to indicate if it is a CompoundConstraint / (Simple)TargetConstraint / (Simple)CardinalityConstraint. With this, we can at least make the Java API very clear, the same as what we defined in the design doc, and the Java side takes responsibility for translating between the user-facing API and PB. Please share your thoughts, +[~asuresh]. bq. My question was more about how do we validate constraints +1 to have a validator in the common layer so the client side can do validation as well. I think validation has two phases: 1) client-side static validation, 2) server-side dynamic validation; #2 will look at cluster state and check things like whether constraints were added to the cluster or not, etc. > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints.
[jira] [Commented] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014914#comment-16014914 ] Hadoop QA commented on YARN-6602: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 21 unchanged - 0 fixed = 28 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 0 new + 4573 unchanged - 1 fixed = 4573 total (was 4574) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 35s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6602 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868609/YARN-6602.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 73146cbb56b4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / eb7791b | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15953/artifact/patchprocess/branch-findbugs-hadoop-yarn-project
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014835#comment-16014835 ] Konstantinos Karanasos commented on YARN-6593: -- Thanks for the feedback, [~leftnoteasy]. Agreed with points (2), (3), (5), and (6) -- will fix them and update the patch. Regarding the rest: bq. PlacementConstraint should be parent class of CompoundPlacementConstraint/SimplePlacementConstraint. Now it added one additional hierarchy to each node in the tree. Can you explain a bit more what you mean here? My goal was to allow arbitrary nesting of compound placement constraints. Given that the protobuf version we use does not allow subclassing, I don't think there is a cleaner way of achieving this. bq. Suggest to follow what we defined in doc, add two separate classes TargetConstraint/CardinalityConstraint, both extends PlacementConstraint. Limit to use 2 out of 3 fields is hard for app developer to understand/use. I added the ConstraintType to show the different types of simple placement constraints, namely target and cardinality (again, given the lack of subclassing in protobuf). Moreover, I wanted to allow adding constraints later that include all three fields, like we mention in the document for cluster admin constraints. I think we can restrict the use of the right fields at each time through the newInstance methods, as I do currently in the patch. But to make it clearer, how about we add a Builder that allows the creation only of targetConstraint() and cardinalityConstraint()? This way we still use a SimplePlacementConstraint, but allow creating specific types of simple constraints. bq. I suggest treat allocationTags as value, and key of allocationTags is always empty. Yep, agreed with that. My question was more about how we validate constraints. I think we should create a validator class, so that we can parse the constraints and decide whether they are of the correct format.
One example is to not allow a targetKey when a constraint is specified with allocation tags. Another could be to allow ORDERED_OR expressions only at the top of the hierarchy, etc. I was discussing that yesterday with [~asuresh] too. Of course this is not urgent, but it would be good to have. Thoughts? > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
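The two example rules discussed above (no targetKey together with allocation tags; ORDERED_OR only at the top of the hierarchy) amount to a single recursive walk over the constraint tree. A minimal sketch of such a client-side static validator, under hypothetical names (Node, ConstraintValidator) rather than the actual YARN API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy constraint-tree node; illustrative only, not the real PlacementConstraint.
class Node {
    String op;                        // "AND", "OR", "ORDERED_OR", or "LEAF"
    String targetKey;                 // may be null
    boolean usesAllocationTags;
    List<Node> children = new ArrayList<>();
}

class ConstraintValidator {
    // Client-side static validation: reject malformed trees before they reach the RM.
    static void validate(Node root) { check(root, true); }

    private static void check(Node n, boolean atTop) {
        // Rule 1: ORDERED_OR expressions are allowed only at the top of the hierarchy.
        if ("ORDERED_OR".equals(n.op) && !atTop)
            throw new IllegalArgumentException("ORDERED_OR only allowed at top level");
        // Rule 2: allocation-tag constraints must not carry a targetKey.
        if (n.usesAllocationTags && n.targetKey != null)
            throw new IllegalArgumentException("targetKey not allowed with allocation tags");
        for (Node c : n.children) check(c, false);
    }
}
```

Server-side dynamic validation (checking against actual cluster state) would be a separate pass, as the thread notes.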
[jira] [Updated] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-6602: Attachment: YARN-6602.001.patch The 001 patch: - Fixes the problem by making {{RMProxy}} ({{ClientRMProxy}} and {{ServerRMProxy}}) no longer singleton instances so we can store the current UGI when they're created, and then use it when creating the RPC in {{RMProxy#getProxy}} - Removes a deprecated method from {{RMProxy}} (deprecated in Hadoop 2.3.0) - Adds a test that verifies that the proxy user (impersonation) works correctly I've also verified that Oozie on Yarn and the Hadoop CLI are able to submit jobs correctly on both a secure and non-secure cluster. > Impersonation does not work if standby RM is contacted first > > > Key: YARN-6602 > URL: https://issues.apache.org/jira/browse/YARN-6602 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0-alpha3 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-6602.001.patch > > > When RM HA is enabled, impersonation does not work correctly if the Yarn > Client connects to the standby RM first. When this happens, the > impersonation is "lost" and the client does things on behalf of the > impersonator user. We saw this with the OOZIE-1770 Oozie on Yarn feature. > I need to investigate this some more, but it appears to be related to > delegation tokens. When this issue occurs, the tokens have the owner as > "oozie" instead of the actual user. 
On a hunch, we found a workaround: explicitly adding a correct RM HA delegation token fixes the problem:
> {code:java}
> org.apache.hadoop.yarn.api.records.Token token =
>     yarnClient.getRMDelegationToken(ClientRMProxy.getRMDelegationTokenService(conf));
> org.apache.hadoop.security.token.Token token2 =
>     new org.apache.hadoop.security.token.Token(token.getIdentifier().array(),
>         token.getPassword().array(), new Text(token.getKind()),
>         new Text(token.getService()));
> UserGroupInformation.getCurrentUser().addToken(token2);
> {code}
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
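The root cause described above, and the shape of the 001 patch's fix, can be illustrated with a toy sketch. All names here are illustrative stand-ins (the real code uses UserGroupInformation and RMProxy): a singleton-style proxy resolves "who am I?" lazily at call time, so whichever user happens to be current wins; a per-instance proxy captures the caller's identity at creation and keeps using it.

```java
// Stand-in for UserGroupInformation.getCurrentUser(); mutable for the demo.
class CurrentUser {
    static String name = "oozie";
    static String get() { return name; }
}

class RmProxy {
    private final String creatorUser;              // captured once, at construction

    RmProxy() { this.creatorUser = CurrentUser.get(); }

    // Fixed behavior: RPCs are made as the user who created the proxy.
    String submitAs() { return creatorUser; }

    // Old singleton-style behavior for contrast: identity read lazily at call time.
    static String submitAsCurrent() { return CurrentUser.get(); }
}
```

With the per-instance proxy, the impersonated user set up at creation time survives later identity changes (e.g. retries after hitting the standby RM); the lazy lookup does not.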
[jira] [Commented] (YARN-6617) Services API delete call first attempt usually fails
[ https://issues.apache.org/jira/browse/YARN-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014787#comment-16014787 ] Hadoop QA commented on YARN-6617: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 40s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 28s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core: The patch generated 1 new + 202 unchanged - 0 fixed = 203 total (was 202) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 41s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6617 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868595/YARN-6617-yarn-native-services.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4f6eb931448a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / 08c756e | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15952/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15952/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt | | Te
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014767#comment-16014767 ] Wangda Tan commented on YARN-6593: -- [~kkaranasos], thanks for updating the patch. Some suggestions: 1) PlacementConstraint should be the parent class of CompoundPlacementConstraint/SimplePlacementConstraint. Right now it adds one additional level of hierarchy to each node in the tree. 2) Suggest moving DelayUnit/delayValue from CompoundPlacementConstraint to a separate class such as DelayCriterion. 3) Suggest introducing a Builder for CompoundPlacementConstraint; with that, a YARN app can call:
{code}
CompoundPlacementConstraint.newBuilder().and/or(, , ...)
CompoundPlacementConstraint.newBuilder().and/or(, , ...)
CompoundPlacementConstraint.newBuilder().and/or(List, , , ...)
{code}
4) SimplePlacementConstraint: suggest following what we defined in the doc and adding two separate classes, TargetConstraint/CardinalityConstraint, both extending PlacementConstraint. Limiting use to 2 out of 3 fields is hard for an app developer to understand/use. 5) Similarly for PlacementConstraintTarget: suggest introducing a Builder, which can support the following creation modes:
{code}
- toNodeAttributes(op, nodeAttributeKey, List)
{code}
To your question: bq. // TODO: Should this be checked here or should it be part of a validation? I suggest treating allocationTags as the value, with the key of allocationTags always empty. 6) Test: should add all PB classes to {{TestPBRecordImpl}}. > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
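The Builder in suggestion (3) above can be sketched as follows. This is a toy under hypothetical names (PC, CompoundBuilder), not the proposed YARN classes: apps compose compound constraints with and(...)/or(...) instead of wiring up compound nodes by hand.

```java
import java.util.Arrays;
import java.util.List;

// Toy constraint node; stands in for PlacementConstraint in the discussion above.
class PC {
    final String op;          // "AND", "OR", or "TAG:<allocation-tag>"
    final List<PC> children;
    PC(String op, List<PC> children) { this.op = op; this.children = children; }
    static PC tag(String t) { return new PC("TAG:" + t, Arrays.asList()); }
}

class CompoundBuilder {
    static CompoundBuilder newBuilder() { return new CompoundBuilder(); }

    // Varargs lets callers pass two, three, or a whole list of sub-constraints.
    PC and(PC... cs) { return new PC("AND", Arrays.asList(cs)); }
    PC or(PC... cs)  { return new PC("OR",  Arrays.asList(cs)); }
}
```

The same pattern extends to PlacementConstraintTarget in suggestion (5): a builder method per creation mode, so invalid field combinations simply cannot be expressed.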
[jira] [Commented] (YARN-6620) [YARN-6223] Support GPU Configuration and Isolation in CGroups.
[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014718#comment-16014718 ] Hadoop QA commented on YARN-6620: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 20s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 12 new + 213 unchanged - 0 fixed = 225 total (was 213) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager generated 2 new + 5 unchanged - 0 fixed = 7 total (was 5) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 33s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 55s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 34s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | | Boxing/unboxing to parse a primitive org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.GpuResourceHandlerImpl.getRequestedGpu(Container) At GpuResourceHandlerImpl.java:org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.GpuResourceHandlerImpl.getRequestedGpu(Container) At GpuResourceHandlerImpl.java:[line 82] | | | org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerRunCommand.getCommandWithArguments() concatenates strings using + in a loop At DockerRu
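The second FindBugs warning above (DockerRunCommand concatenating strings with + in a loop) is the classic quadratic-append pattern. A sketch of the usual fix; this is illustrative, with a hypothetical helper name, not the actual DockerRunCommand code:

```java
import java.util.List;

class CommandJoiner {
    // Before: result += " " + arg; inside the loop allocates a fresh String
    // on every iteration. A StringBuilder appends in place instead.
    static String join(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(p);
        }
        return sb.toString();
    }
}
```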
[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2
[ https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014710#comment-16014710 ] Hadoop QA commented on YARN-6608: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 18 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 28s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 23s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 52s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} | 
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 59s{color} | {color:red} root in the patch failed with JDK v1.8.0_131. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 59s{color} | {color:red} root in the patch failed with JDK v1.8.0_131. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 34s{color} | {color:red} root in the patch failed with JDK v1.7.0_121. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 34s{color} | {color:red} root in the patch failed with JDK v1.7.0_121. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 29s{color} | {color:orange} root: The patch generated 37 new + 133 unchanged - 106 fixed = 170 total (was 239) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 17s{color} | {color:red} hadoop-sls in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 9s{color} | {color:red} The patch generated 2 new + 487 unchanged - 22 fixed = 489 total (was 509) {color} | | {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange} 0m 8s{color} | {color:orange} The patch generated 16 new + 46 unchanged - 0 fixed = 62 total (was 46) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 2 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | |
[jira] [Commented] (YARN-6493) Print node partition in assignContainer logs
[ https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014708#comment-16014708 ] Wangda Tan commented on YARN-6493: -- Thanks [~jhung], patch LGTM, +1. Will commit soon. > Print node partition in assignContainer logs > > > Key: YARN-6493 > URL: https://issues.apache.org/jira/browse/YARN-6493 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.8.0, 2.7.4, 2.6.6 >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-6493.001.patch, YARN-6493.002.patch, > YARN-6493.003.patch, YARN-6493-branch-2.7.001.patch, > YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, > YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch > > > It would be useful to have the node's partition when logging a container > allocation, for tracking purposes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
[ https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014696#comment-16014696 ] Wangda Tan commented on YARN-6547: -- Thanks [~curino], I tried to run the tests with the patch; all passed except RUMEN. Only one comment: could you move the changes to JvmMetrics into separate public static methods like reregister, etc., and mark them VisibleForTesting? I'm not sure whether the changes are safe or not; moving such test-only code into separate methods should be better. Beyond that, the patch looks good. > Enhance SLS-based tests leveraging invariant checker > > > Key: YARN-6547 > URL: https://issues.apache.org/jira/browse/YARN-6547 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6547.v0.patch, YARN-6547.v1.patch > > > We can leverage {{InvariantChecker}}s to provide a more thorough validation > of SLS-based tests. This patch introduces invariant checking during and at > the end of the run. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6617) Services API delete call first attempt usually fails
[ https://issues.apache.org/jira/browse/YARN-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6617: -- Attachment: YARN-6617-yarn-native-services.1.patch Made actionStop wait for a few seconds and then do a force kill. Also fixed an issue with specifying the queue name. > Services API delete call first attempt usually fails > > > Key: YARN-6617 > URL: https://issues.apache.org/jira/browse/YARN-6617 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-6617-yarn-native-services.1.patch > > > The services API is calling actionStop, which queues a stop action, > immediately followed by actionDestroy, which fails because the app is still > running. The actionStop method is ignoring the force option, so one solution > would be to reintroduce handling for the force option and have the services > API set this option. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6617) Services API delete call first attempt usually fails
[ https://issues.apache.org/jira/browse/YARN-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He reassigned YARN-6617: - Assignee: Jian He > Services API delete call first attempt usually fails > > > Key: YARN-6617 > URL: https://issues.apache.org/jira/browse/YARN-6617 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Jian He > Fix For: yarn-native-services > > > The services API is calling actionStop, which queues a stop action, > immediately followed by actionDestroy, which fails because the app is still > running. The actionStop method is ignoring the force option, so one solution > would be to reintroduce handling for the force option and have the services > API set this option. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider
[ https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014617#comment-16014617 ] Wangda Tan commented on YARN-5949: -- The latest patch LGTM; will commit tomorrow if there are no objections. Thanks [~jhung]. > Add pluggable configuration policy interface as a component of > MutableCSConfigurationProvider > - > > Key: YARN-5949 > URL: https://issues.apache.org/jira/browse/YARN-5949 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-5949-YARN-5734.001.patch, > YARN-5949-YARN-5734.002.patch, YARN-5949-YARN-5734.003.patch, > YARN-5949-YARN-5734.004.patch, YARN-5949-YARN-5734.005.patch > > > This will allow different policies to customize how/if configuration changes > should be applied (for example, a policy might restrict whether a > configuration change by a certain user is allowed). This will be enforced by > the MutableCSConfigurationProvider. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6223) [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation on YARN
[ https://issues.apache.org/jira/browse/YARN-6223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014609#comment-16014609 ] Wangda Tan commented on YARN-6223: -- Opened YARN-6620 to track code work and reviews for GPU configuration/isolation. > [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation > on YARN > > > Key: YARN-6223 > URL: https://issues.apache.org/jira/browse/YARN-6223 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-6223.Natively-support-GPU-on-YARN-v1.pdf, > YARN-6223.wip.1.patch > > > A variety of workloads are moving to YARN, including machine learning / > deep learning, which can be sped up by leveraging GPU computation power. > Workloads should be able to request GPUs from YARN as simply as CPU and memory. > *To make a complete GPU story, we should support the following pieces:* > 1) GPU discovery/configuration: the admin can either configure GPU resources and > architectures on each node, or, more advanced, the NodeManager can automatically > discover GPU resources and architectures and report them to the ResourceManager. > 2) GPU scheduling: the YARN scheduler should account for GPU as a resource type, just > like CPU and memory. > 3) GPU isolation/monitoring: once a task is launched with GPU resources, the > NodeManager should properly isolate and monitor the task's resource usage. > For #2, YARN-3926 can support it natively. For #3, YARN-3611 has introduced > an extensible framework to support isolation for different resource types and > different runtimes. > *Related JIRAs:* > There are a couple of JIRAs (YARN-4122/YARN-5517) filed with similar goals but > different solutions: > For scheduling: > - YARN-4122/YARN-5517 both add a new GPU resource type to the Resource > protocol instead of leveraging YARN-3926. 
> For isolation: > - YARN-4122 proposed to use CGroups to do isolation, which cannot solve the > problems listed at > https://github.com/NVIDIA/nvidia-docker/wiki/GPU-isolation#challenges, such as > minor device number mapping, loading the nvidia_uvm module, mismatches of CUDA/driver > versions, etc. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6620) [YARN-6223] Support GPU Configuration and Isolation in CGroups.
[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan reassigned YARN-6620: Assignee: Wangda Tan > [YARN-6223] Support GPU Configuration and Isolation in CGroups. > --- > > Key: YARN-6620 > URL: https://issues.apache.org/jira/browse/YARN-6620 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-6620.001.patch > > > This JIRA plans to add support for: > 1) GPU configuration for NodeManagers > 2) Isolation in CGroups. > These are the minimal requirements to support GPUs on YARN.
[jira] [Updated] (YARN-6620) [YARN-6223] Support GPU Configuration and Isolation in CGroups.
[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-6620: - Attachment: YARN-6620.001.patch Attached ver.1 patch. Since YARN-3926 is not merged yet, the number of requested GPUs can for now be specified in the environment. This is only for test purposes; this code will not be committed before YARN-3926 gets merged. [~jianhe]/[~vvasudev], could you help to check the patch? > [YARN-6223] Support GPU Configuration and Isolation in CGroups. > --- > > Key: YARN-6620 > URL: https://issues.apache.org/jira/browse/YARN-6620 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan > Attachments: YARN-6620.001.patch > > > This JIRA plans to add support for: > 1) GPU configuration for NodeManagers > 2) Isolation in CGroups. > These are the minimal requirements to support GPUs on YARN.
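For context, CGroups-based GPU isolation generally works through the devices cgroup controller: the container's cgroup receives a devices.deny (or devices.allow) entry for each GPU character device. The sketch below is a hypothetical illustration of building such an entry; the class and method names are invented and this is not the actual YARN-6620 code. NVIDIA GPU character devices conventionally use major device number 195.

```java
// Hypothetical sketch (invented names, NOT the actual YARN-6620 code) of
// building a devices-cgroup entry that denies a container access to one GPU.
// The entry would be written to the container cgroup's devices.deny file,
// e.g. /sys/fs/cgroup/devices/<container>/devices.deny
public class GpuDeviceEntry {
    // Conventional major device number for NVIDIA GPU character devices.
    static final int NVIDIA_MAJOR = 195;

    // Format a devices-controller entry: "c <major>:<minor> rwm"
    // (character device, read/write/mknod permissions).
    static String deviceEntry(int minor) {
        return String.format("c %d:%d rwm", NVIDIA_MAJOR, minor);
    }

    public static void main(String[] args) {
        // Deny a container access to /dev/nvidia1:
        System.out.println(deviceEntry(1)); // prints "c 195:1 rwm"
    }
}
```

Writing the entry requires root (or a privileged helper such as the container-executor), which is one reason this work is gated behind NodeManager configuration.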
[jira] [Updated] (YARN-6620) [YARN-6223] Support GPU Configuration and Isolation in CGroups.
[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-6620: - Description: This JIRA plans to add support for: 1) GPU configuration for NodeManagers 2) Isolation in CGroups. These are the minimal requirements to support GPUs on YARN. > [YARN-6223] Support GPU Configuration and Isolation in CGroups. > --- > > Key: YARN-6620 > URL: https://issues.apache.org/jira/browse/YARN-6620 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan > > This JIRA plans to add support for: > 1) GPU configuration for NodeManagers > 2) Isolation in CGroups. > These are the minimal requirements to support GPUs on YARN.
[jira] [Created] (YARN-6620) [YARN-6223] Support GPU Configuration and Isolation in CGroups.
Wangda Tan created YARN-6620: Summary: [YARN-6223] Support GPU Configuration and Isolation in CGroups. Key: YARN-6620 URL: https://issues.apache.org/jira/browse/YARN-6620 Project: Hadoop YARN Issue Type: Sub-task Reporter: Wangda Tan
[jira] [Created] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects
Arun Suresh created YARN-6619: - Summary: AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects Key: YARN-6619 URL: https://issues.apache.org/jira/browse/YARN-6619 Project: Hadoop YARN Issue Type: Sub-task Reporter: Arun Suresh Assignee: Panagiotis Garefalakis Opening this JIRA to track changes needed in the AMRMClient to incorporate the PlacementConstraint and SchedulingRequest objects.
[jira] [Commented] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
[ https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014579#comment-16014579 ] Hadoop QA commented on YARN-6547: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 41s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 56s{color} | {color:orange} root: The patch generated 6 new + 22 unchanged - 0 fixed = 28 total (was 22) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 44s{color} | {color:red} hadoop-sls in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}100m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.sls.TestSLSRunner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6547 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868558/YARN-6547.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 8c35de6cfa25 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 035d468 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15949/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Bui
[jira] [Commented] (YARN-6587) Refactor of ResourceManager#startWebApp in a Util class
[ https://issues.apache.org/jira/browse/YARN-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014565#comment-16014565 ] Carlo Curino commented on YARN-6587: I had to re-open the issue for Jenkins to pick up the branch-2 patch. > Refactor of ResourceManager#startWebApp in a Util class > --- > > Key: YARN-6587 > URL: https://issues.apache.org/jira/browse/YARN-6587 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6587-branch-2.v1.patch, YARN-6587.v1.patch, > YARN-6587.v2.patch > > > This JIRA tracks the refactoring of ResourceManager#startWebApp into a util class, > since the Router in YARN-5412 has to implement the same logic for filtering and > authentication.
[jira] [Reopened] (YARN-6587) Refactor of ResourceManager#startWebApp in a Util class
[ https://issues.apache.org/jira/browse/YARN-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino reopened YARN-6587: > Refactor of ResourceManager#startWebApp in a Util class > --- > > Key: YARN-6587 > URL: https://issues.apache.org/jira/browse/YARN-6587 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6587-branch-2.v1.patch, YARN-6587.v1.patch, > YARN-6587.v2.patch > > > This JIRA tracks the refactoring of ResourceManager#startWebApp into a util class, > since the Router in YARN-5412 has to implement the same logic for filtering and > authentication.
[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2
[ https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014561#comment-16014561 ] Carlo Curino commented on YARN-6608: Actually, it took a bit more (idea was caching stuff and making it work). On top of the previous work, I had to pull back a few classes (from rumen and invariant checkers), and bypass some other compilation issues. Patch v1 should compile and pass basic SLS tests (bar YARN-6111 rumen drama). > Backport all SLS improvements from trunk to branch-2 > > > Key: YARN-6608 > URL: https://issues.apache.org/jira/browse/YARN-6608 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6608-branch-2.v0.patch, YARN-6608-branch-2.v1.patch > > > The SLS has received lots of attention in trunk, but only some of it made it > back to branch-2. This patch is a "raw" fork-lift of the trunk development > from hadoop-tools/hadoop-sls.
[jira] [Updated] (YARN-6608) Backport all SLS improvements from trunk to branch-2
[ https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6608: --- Attachment: YARN-6608-branch-2.v1.patch > Backport all SLS improvements from trunk to branch-2 > > > Key: YARN-6608 > URL: https://issues.apache.org/jira/browse/YARN-6608 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6608-branch-2.v0.patch, YARN-6608-branch-2.v1.patch > > > The SLS has received lots of attention in trunk, but only some of it made it > back to branch-2. This patch is a "raw" fork-lift of the trunk development > from hadoop-tools/hadoop-sls.
[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014513#comment-16014513 ] Ruslan Dautkhanov commented on YARN-6615: - Thank you [~wilfreds]. Is it possible to backport this patch to Hadoop 2.6 as well? > AmIpFilter drops query parameters on redirect > - > > Key: YARN-6615 > URL: https://issues.apache.org/jira/browse/YARN-6615 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha2 >Reporter: Wilfred Spiegelenburg >Assignee: Wilfred Spiegelenburg > Attachments: YARN-6615.1.patch > > > When an AM web request is redirected to the RM, the query parameters are > dropped from the web request. > This happens for Spark as described in SPARK-20772. > The repro steps are: > - Start up the spark-shell in yarn mode and run a job > - Try to access the job details through http://:4040/jobs/job?id=0 > - An HTTP ERROR 400 is thrown (requirement failed: missing id parameter) > This works fine in local or standalone mode, but does not work on YARN, where > the query parameter is dropped. If the UI filter > org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from > the config, the request succeeds, which shows that the problem is in the filter.
[jira] [Comment Edited] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014435#comment-16014435 ] Konstantinos Karanasos edited comment on YARN-6593 at 5/17/17 5:34 PM: --- Attaching first patch for the constraints API. [~leftnoteasy], [~asuresh], [~pg1...@imperial.ac.uk], please give it a first look. was (Author: kkaranasos): Attaching first patch for the constraints API. [~leftnoteasy], [~asuresh], please give it a first look. > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints.
[jira] [Updated] (YARN-6594) [API] Introduce SchedulingRequest object
[ https://issues.apache.org/jira/browse/YARN-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos updated YARN-6594: - Attachment: YARN-6594.001.patch Attaching first version of the patch. It has to be applied on top of the YARN-6593 patch. +[~leftnoteasy],[~asuresh]. > [API] Introduce SchedulingRequest object > > > Key: YARN-6594 > URL: https://issues.apache.org/jira/browse/YARN-6594 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6594.001.patch > > > This JIRA introduces a new SchedulingRequest object. > It will be part of the {{AllocateRequest}} and will be used to define sizing > (e.g., number of allocations, size of allocations) and placement constraints > for allocations. > Applications can use either this new object (when rich placement constraints > are required) or the existing {{ResourceRequest}} object.
[jira] [Updated] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos updated YARN-6593: - Attachment: YARN-6593.001.patch Attaching first patch for the constraints API. [~leftnoteasy], [~asuresh], please give it a first look. > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints.
[jira] [Commented] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
[ https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014403#comment-16014403 ] Carlo Curino commented on YARN-6547: Hi [~wangda], thanks for looking at this. I updated the patch (v1). Issue #1: Does not happen for me on proper runs (it was happening due to other failures; since the test is parametrized, things might have bled across instances on failures); seems OK now. Issue #2: This is related to YARN-6111, and to what I was mentioning in my first comment. We should finesse a set of invariants that are tight, yet robust. This should be a combination of traces, timeouts, and exit invariants for which we have good confidence (I need your help for this). For now I loosened up the {{exit-invariants.txt}} so it doesn't scream at RUMEN runs, but we should follow up and tighten where possible (maybe after YARN-6111 is addressed). Issue #3: This is a parametrized test issue (separate runs were fine); the stop/restart interacts poorly with the MetricsSystem registration process (the JvmMetrics object persists across runs, so it is not re-registered). I fixed it by ensuring that even if JvmMetrics already exists, we check whether it should be re-registered with the MetricsSystem. (Please double-check that code.) > Enhance SLS-based tests leveraging invariant checker > > > Key: YARN-6547 > URL: https://issues.apache.org/jira/browse/YARN-6547 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6547.v0.patch, YARN-6547.v1.patch > > > We can leverage {{InvariantChecker}}s to provide a more thorough validation > of SLS-based tests. This patch introduces invariants checking during and at > the end of the run.
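The re-registration fix described under issue #3 can be illustrated with simplified stand-ins (ToyMetricsSystem and ToyJvmMetrics are invented names, not the real Hadoop classes): the singleton metrics source survives a MetricsSystem stop/restart in the same JVM, so the init path must re-register it instead of assuming an existing instance is still registered.

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-ins (invented classes, not the real Hadoop ones) showing
// the fix: a singleton metrics source must be re-registered after the
// metrics system is stopped and restarted, because the singleton object
// survives in the JVM while its registration does not.
class ToyMetricsSystem {
    private final Set<Object> sources = new HashSet<>();
    void register(Object source) { sources.add(source); }
    boolean isRegistered(Object source) { return sources.contains(source); }
    // Simulates a stop/restart: all registrations are lost.
    void restart() { sources.clear(); }
}

class ToyJvmMetrics {
    private static ToyJvmMetrics instance;

    static ToyJvmMetrics initSingleton(ToyMetricsSystem ms) {
        if (instance == null) {
            instance = new ToyJvmMetrics();
        }
        // The fix: even when the singleton already exists, check whether it
        // still needs to be registered with the (possibly restarted) system.
        if (!ms.isRegistered(instance)) {
            ms.register(instance);
        }
        return instance;
    }
}
```

Without the isRegistered check, a second parametrized test run against a restarted metrics system would see an existing singleton and silently skip registration.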
[jira] [Commented] (YARN-6160) Create an agent-less docker-less provider in the native services framework
[ https://issues.apache.org/jira/browse/YARN-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014387#comment-16014387 ] Jian He commented on YARN-6160: --- Sounds good, I'll commit the current patch. > Create an agent-less docker-less provider in the native services framework > -- > > Key: YARN-6160 > URL: https://issues.apache.org/jira/browse/YARN-6160 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Fix For: yarn-native-services > > Attachments: YARN-6160-yarn-native-services.001.patch, > YARN-6160-yarn-native-services.002.patch, > YARN-6160-yarn-native-services.003.patch > > > The goal of the agent-less docker-less provider is to be able to use the YARN > native services framework when Docker is not installed or other methods of > app resource installation are preferable.
[jira] [Updated] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
[ https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6547: --- Attachment: YARN-6547.v1.patch > Enhance SLS-based tests leveraging invariant checker > > > Key: YARN-6547 > URL: https://issues.apache.org/jira/browse/YARN-6547 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6547.v0.patch, YARN-6547.v1.patch > > > We can leverage {{InvariantChecker}}s to provide a more thorough validation > of SLS-based tests. This patch introduces invariants checking during and at > the end of the run.
[jira] [Updated] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-6615: - Affects Version/s: 2.8.0 2.7.3 2.6.5 Target Version/s: 2.7.4, 2.8.1 Component/s: (was: amrmproxy) Thanks for the report and patch! amrmproxy refers to a component of YARN federation and is unrelated, so I removed it. I also updated the affected versions since this appears to be a long-standing bug and not just in the most recent 3.x alpha release. +1 for the patch. Would you mind providing a patch for branch-2.8 as well? I think that patch should apply to branch-2.7 as well since those two branches have the same code for the two files involved. > AmIpFilter drops query parameters on redirect > - > > Key: YARN-6615 > URL: https://issues.apache.org/jira/browse/YARN-6615 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha2 >Reporter: Wilfred Spiegelenburg >Assignee: Wilfred Spiegelenburg > Attachments: YARN-6615.1.patch > > > When an AM web request is redirected to the RM, the query parameters are > dropped from the web request. > This happens for Spark as described in SPARK-20772. > The repro steps are: > - Start up the spark-shell in yarn mode and run a job > - Try to access the job details through http://:4040/jobs/job?id=0 > - An HTTP ERROR 400 is thrown (requirement failed: missing id parameter) > This works fine in local or standalone mode, but does not work on YARN, where > the query parameter is dropped. If the UI filter > org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from > the config, the request succeeds, which shows that the problem is in the filter.
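The essence of the fix is that the redirect URL built by the filter must carry the original query string along. The sketch below is a hypothetical helper illustrating the idea; buildRedirectUrl is an invented name, not the actual AmIpFilter code.

```java
// Hypothetical helper (invented name, not the actual AmIpFilter fix)
// showing the essence of the bug: a redirect target built without the
// original query string loses parameters such as "?id=0".
public class RedirectUrls {
    static String buildRedirectUrl(String proxyBase, String requestUri,
                                   String queryString) {
        StringBuilder target = new StringBuilder(proxyBase).append(requestUri);
        // Omitting this append is exactly the reported bug: the query
        // parameters never reach the RM proxy.
        if (queryString != null && !queryString.isEmpty()) {
            target.append('?').append(queryString);
        }
        return target.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildRedirectUrl(
            "http://rm:8088/proxy/application_1", "/jobs/job", "id=0"));
    }
}
```

In a servlet filter the query string would come from HttpServletRequest#getQueryString(), which returns null when no parameters are present, hence the null guard.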
[jira] [Commented] (YARN-6618) TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
[ https://issues.apache.org/jira/browse/YARN-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014308#comment-16014308 ] Hadoop QA commented on YARN-6618: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 45s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 17s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6618 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868546/YARN-6618.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b8d357e4fe2d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 035d468 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15948/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15948/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15948/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction > occurs more than once > ---
[jira] [Commented] (YARN-6618) TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
[ https://issues.apache.org/jira/browse/YARN-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014229#comment-16014229 ] Eric Badger commented on YARN-6618: --- Fix lgtm. +1 (non-binding) pending hadoopqa > TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction > occurs more than once > --- > > Key: YARN-6618 > URL: https://issues.apache.org/jira/browse/YARN-6618 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Affects Versions: 2.8.0 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Minor > Attachments: YARN-6618.001.patch > > > The testCompactionCycle unit test is verifying that the compaction cycle > occurs after startup, but on rare occasions the compaction cycle can occur more than > once, which fails the test. The unit test needs to account for this.
[jira] [Updated] (YARN-6613) Update json validation for new native services providers
[ https://issues.apache.org/jira/browse/YARN-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-6613: - Attachment: YARN-6613-yarn-native-services.002.patch > Update json validation for new native services providers > > > Key: YARN-6613 > URL: https://issues.apache.org/jira/browse/YARN-6613 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Fix For: yarn-native-services > > Attachments: YARN-6613-yarn-native-services.001.patch, > YARN-6613-yarn-native-services.002.patch > > > YARN-6160 started some work enabling different validation for each native > services provider. The validation done in > ServiceApiUtil#validateApplicationPayload needs to be updated accordingly. This > validation should also be updated to handle the APPLICATION artifact type, > which does not have an associated provider.
[jira] [Updated] (YARN-6618) TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
[ https://issues.apache.org/jira/browse/YARN-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Lowe updated YARN-6618:
---

Attachment: YARN-6618.001.patch

Simple fix: we just need to specify atLeastOnce() on the timeout so we can accept more than one invocation.

> TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
> Key: YARN-6618
> URL: https://issues.apache.org/jira/browse/YARN-6618
> Project: Hadoop YARN
> Issue Type: Bug
> Components: test
> Affects Versions: 2.8.0
> Reporter: Jason Lowe
> Assignee: Jason Lowe
> Attachments: YARN-6618.001.patch
>
> The testCompactionCycle unit test is verifying that the compaction cycle occurs after startup, but rarely the compaction cycle can occur more than once, which fails the test. The unit test needs to account for this.
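The flakiness pattern, and why the at-least-once check fixes it, can be illustrated outside Mockito with a plain counter. This is a hedged sketch of the idea only; the actual patch uses Mockito's timeout verification with atLeastOnce(), as the comment above says, and the class and method names below are hypothetical:

```java
// Illustrates why asserting "exactly one compaction" is flaky when the
// background compaction cycle may legitimately run again before the test
// verifies, and why an "at least once" check (the semantics of Mockito's
// atLeastOnce()) stays robust.
public class CompactionCycleCheck {

    // the original, flaky expectation: exactly one compaction observed
    static boolean exactlyOnce(int observedCompactions) {
        return observedCompactions == 1;
    }

    // the fixed, timing-tolerant expectation
    static boolean atLeastOnce(int observedCompactions) {
        return observedCompactions >= 1;
    }

    public static void main(String[] args) {
        // a second compaction cycle sneaks in before verification
        int observed = 2;
        System.out.println("exactlyOnce: " + exactlyOnce(observed)); // false -> test fails
        System.out.println("atLeastOnce: " + atLeastOnce(observed)); // true  -> test passes
    }
}
```

The same trade-off applies to any verification of a recurring background task: assert a lower bound, not an exact count, unless the test fully controls the trigger.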
[jira] [Updated] (YARN-6618) TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
[ https://issues.apache.org/jira/browse/YARN-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Lowe updated YARN-6618:
---

Priority: Minor (was: Major)

> TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
> Key: YARN-6618
> URL: https://issues.apache.org/jira/browse/YARN-6618
> Project: Hadoop YARN
> Issue Type: Bug
> Components: test
> Affects Versions: 2.8.0
> Reporter: Jason Lowe
> Assignee: Jason Lowe
> Priority: Minor
> Attachments: YARN-6618.001.patch
>
> The testCompactionCycle unit test is verifying that the compaction cycle occurs after startup, but rarely the compaction cycle can occur more than once which fails the test. The unit test needs to account for this.
[jira] [Created] (YARN-6618) TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
Jason Lowe created YARN-6618:
---

Summary: TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once
Key: YARN-6618
URL: https://issues.apache.org/jira/browse/YARN-6618
Project: Hadoop YARN
Issue Type: Bug
Components: test
Affects Versions: 2.8.0
Reporter: Jason Lowe
Assignee: Jason Lowe

The testCompactionCycle unit test is verifying that the compaction cycle occurs after startup, but rarely the compaction cycle can occur more than once, which fails the test. The unit test needs to account for this.
[jira] [Created] (YARN-6617) Services API delete call first attempt usually fails
Billie Rinaldi created YARN-6617:
---

Summary: Services API delete call first attempt usually fails
Key: YARN-6617
URL: https://issues.apache.org/jira/browse/YARN-6617
Project: Hadoop YARN
Issue Type: Sub-task
Reporter: Billie Rinaldi
Fix For: yarn-native-services

The services API is calling actionStop, which queues a stop action, immediately followed by actionDestroy, which fails because the app is still running. The actionStop method is ignoring the force option, so one solution would be to reintroduce handling for the force option and have the services API set this option.
[jira] [Commented] (YARN-6160) Create an agent-less docker-less provider in the native services framework
[ https://issues.apache.org/jira/browse/YARN-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014063#comment-16014063 ]

Billie Rinaldi commented on YARN-6160:
---

I think we will need to fix the client install. For a tarball, client install would download and untar the tarball, then create config files needed for a client installation of the app. For a docker app, maybe client install could create the client config files, then print out a sample docker run command that uses the config files to run an app client.

> Create an agent-less docker-less provider in the native services framework
> Key: YARN-6160
> URL: https://issues.apache.org/jira/browse/YARN-6160
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Billie Rinaldi
> Assignee: Billie Rinaldi
> Fix For: yarn-native-services
> Attachments: YARN-6160-yarn-native-services.001.patch, YARN-6160-yarn-native-services.002.patch, YARN-6160-yarn-native-services.003.patch
>
> The goal of the agent-less docker-less provider is to be able to use the YARN native services framework when Docker is not installed or other methods of app resource installation are preferable.
[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013967#comment-16013967 ]

Hadoop QA commented on YARN-6615:
---

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 12m 22s | trunk passed |
| +1 | compile | 0m 14s | trunk passed |
| +1 | checkstyle | 0m 10s | trunk passed |
| +1 | mvnsite | 0m 16s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 0m 25s | trunk passed |
| +1 | javadoc | 0m 10s | trunk passed |
| +1 | mvninstall | 0m 14s | the patch passed |
| +1 | compile | 0m 13s | the patch passed |
| +1 | javac | 0m 13s | the patch passed |
| +1 | checkstyle | 0m 8s | the patch passed |
| +1 | mvnsite | 0m 14s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 30s | the patch passed |
| +1 | javadoc | 0m 9s | the patch passed |
| +1 | unit | 0m 22s | hadoop-yarn-server-web-proxy in the patch passed. |
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | 17m 34s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6615 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868515/YARN-6615.1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux d6b735b982f8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 035d468 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15947/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15947/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> AmIpFilter drops query parameters on redirect
> Key: YARN-6615
> URL: https://issues.apache.org/jira/browse/YARN-6615
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy
> Affects Versions: 3.0.0-alpha2
> Reporter: Wilfred Spiegelenburg
> Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6615.1.patch
>
> When an AM web request is redirected to the RM the
[jira] [Updated] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wilfred Spiegelenburg updated YARN-6615:
---

Attachment: YARN-6615.1.patch

The patch adds the query parameters, if set, before triggering the redirect, and includes a new test.

> AmIpFilter drops query parameters on redirect
> Key: YARN-6615
> URL: https://issues.apache.org/jira/browse/YARN-6615
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy
> Affects Versions: 3.0.0-alpha2
> Reporter: Wilfred Spiegelenburg
> Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6615.1.patch
>
> When an AM web request is redirected to the RM, the query parameters are dropped from the web request. This happens for Spark as described in SPARK-20772.
>
> The repro steps are:
> - Start up the spark-shell in yarn mode and run a job
> - Try to access the job details through http://:4040/jobs/job?id=0
> - An HTTP ERROR 400 is thrown (requirement failed: missing id parameter)
>
> This works fine in local or standalone mode, but does not work on Yarn, where the query parameter is dropped. The problem disappears if the UI filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from the config, which shows that the problem is in the filter.
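The shape of the fix described above (append the request's query string to the redirect target when one is present) can be sketched as plain string handling. This is a hedged illustration with hypothetical names, not the actual YARN-6615 patch; in the real filter the target URL comes from the RM proxy address and the query string from HttpServletRequest#getQueryString():

```java
// Sketch: rebuild the redirect target so the original query string survives.
public class RedirectUrl {

    // queryString is the raw query portion, e.g. "id=0", or null when absent
    static String withQuery(String target, String queryString) {
        if (queryString == null || queryString.isEmpty()) {
            return target; // nothing to preserve
        }
        return target + "?" + queryString;
    }

    public static void main(String[] args) {
        // the Spark repro: /jobs/job?id=0 must keep its id parameter
        System.out.println(withQuery("http://rm:8088/proxy/app/jobs/job", "id=0"));
        // prints http://rm:8088/proxy/app/jobs/job?id=0
    }
}
```

Without this step the redirect lands on /jobs/job with no parameters, which is exactly the "missing id parameter" HTTP 400 in the repro.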
[jira] [Created] (YARN-6616) YARN AHS shows submitTime for jobs same as startTime
Prabhu Joseph created YARN-6616:
---

Summary: YARN AHS shows submitTime for jobs same as startTime
Key: YARN-6616
URL: https://issues.apache.org/jira/browse/YARN-6616
Project: Hadoop YARN
Issue Type: Bug
Affects Versions: 2.7.3
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph
Priority: Minor

YARN AHS returns the startTime value for both submitTime and startTime for jobs. It looks like the code sets submitTime to the startTime value:

https://github.com/apache/hadoop/blob/branch-2.7.3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/dao/AppInfo.java#L80

{code}
curl --negotiate -u: http://prabhuzeppelin3.openstacklocal:8188/ws/v1/applicationhistory/apps
1495015537574 1495015537574 1495016384084
{code}

Note that the first two timestamps in the response excerpt are identical.
[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate
[ https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil G updated YARN-6610:
---

Issue Type: Sub-task (was: Bug)
Parent: YARN-3926

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: YARN-3926
> Reporter: Daniel Templeton
> Assignee: Daniel Templeton
> Priority: Critical
> Attachments: YARN-6610.001.patch
>
> The {{dominant}} param assumes there are only two resources, i.e. true means to compare the dominant, and false means to compare the subordinate. Now that there are _n_ resources, this parameter no longer makes sense.
[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate
[ https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013745#comment-16013745 ]

Sunil G commented on YARN-6610:
---

Converted to a sub-task under YARN-3926. Please revert if there are any issues.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: resourcemanager
> Affects Versions: YARN-3926
> Reporter: Daniel Templeton
> Assignee: Daniel Templeton
> Priority: Critical
> Attachments: YARN-6610.001.patch
>
> The {{dominant}} param assumes there are only two resources, i.e. true means to compare the dominant, and false means to compare the subordinate. Now that there are _n_ resources, this parameter no longer makes sense.
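With n resources, one natural generalization of the boolean dominant flag is to rank all resource shares and compare by position in that ranking rather than "dominant vs. subordinate". The sketch below is a hedged illustration of that idea with hypothetical names; it is not the YARN-6610 patch and does not mirror the real DominantResourceCalculator API:

```java
import java.util.Arrays;

// Sketch: compute each resource's share of the cluster and sort the shares.
// With exactly two resources, "dominant" meant the larger share and
// "subordinate" the smaller; with n resources, an index into the sorted
// shares covers both cases and every one in between.
public class ResourceShares {

    static double[] sortedShares(long[] used, long[] clusterTotal) {
        double[] shares = new double[used.length];
        for (int i = 0; i < used.length; i++) {
            shares[i] = (double) used[i] / clusterTotal[i];
        }
        Arrays.sort(shares); // ascending: the last entry is the dominant share
        return shares;
    }

    public static void main(String[] args) {
        // e.g. memory, vcores, and a third resource such as GPUs
        double[] s = sortedShares(new long[]{4, 2, 1}, new long[]{8, 8, 4});
        System.out.println(Arrays.toString(s)); // [0.25, 0.25, 0.5]
    }
}
```

In this framing, classic two-resource DRF compares the last (largest) entries, and ties can be broken by walking down the sorted array, which is exactly where a boolean parameter stops making sense.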
[jira] [Commented] (YARN-6560) SLS doesn't honor node total resource specified in sls-runner.xml
[ https://issues.apache.org/jira/browse/YARN-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013698#comment-16013698 ]

Sunil G commented on YARN-6560:
---

Patch looks fine to me as well. I ran TestSLSRunner manually. Will commit later today if no objections.

> SLS doesn't honor node total resource specified in sls-runner.xml
> Key: YARN-6560
> URL: https://issues.apache.org/jira/browse/YARN-6560
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Wangda Tan
> Assignee: Wangda Tan
> Attachments: YARN-6560.1.patch, YARN-6560.2.patch, YARN-6560.3.patch
>
> Now SLSRunner extends ToolRunner, so setConf will be called twice: once in the init() of SLSRunner and once in ToolRunner. The latter will overwrite the previous one, so it won't correctly load sls-runner.xml.
[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013695#comment-16013695 ]

Sunil G commented on YARN-2113:
---

The javadoc warning is valid. I will fix it in the next patch, but will wait for a round of review from [~eepayne] and [~leftnoteasy] before updating.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: scheduler
> Reporter: Vinod Kumar Vavilapalli
> Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf,
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
> YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch,
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch,
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch,
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch,
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch,
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch,
> YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 Intra-QueuePreemption Behavior.pdf,
> YARN-2113.v0.patch
>
> Preemption today only works across queues and moves around resources across queues per demand and usage. We should also have user-level preemption within a queue, to balance capacity across users in a predictable manner.
[jira] [Commented] (YARN-6378) Negative usedResources memory in CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013652#comment-16013652 ]

powerinf commented on YARN-6378:
---

Does Hadoop 2.6.3 have the same problem?

> Negative usedResources memory in CapacityScheduler
> Key: YARN-6378
> URL: https://issues.apache.org/jira/browse/YARN-6378
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler, resourcemanager
> Affects Versions: 2.7.2
> Reporter: Ravi Prakash
> Assignee: Ravi Prakash
>
> Courtesy Thomas Nystrand, we found that on two of our clusters configured with the CapacityScheduler, usedResources occasionally becomes negative. e.g.
> {code}
> 2017-03-15 11:10:09,449 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1487222361993_17177_01 container=Container: [ContainerId: container_1487222361993_17177_01_14, NodeId: :27249, NodeHttpAddress: :8042, Resource: , Priority: 2, Token: null, ] queue=: capacity=0.2, absoluteCapacity=0.2, usedResources=, usedCapacity=0.03409091, absoluteUsedCapacity=0.006818182, numApps=1, numContainers=3 clusterResource= type=RACK_LOCAL
> {code}