[jira] [Commented] (YARN-8529) Add timeout to RouterWebServiceUtil#invokeRMWebService
[ https://issues.apache.org/jira/browse/YARN-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047221#comment-17047221 ]

Minni Mittal commented on YARN-8529:
------------------------------------

[~bibinchundatt] [~elgoiri] Can you please review the patch?

> Add timeout to RouterWebServiceUtil#invokeRMWebService
> ------------------------------------------------------
>
>                 Key: YARN-8529
>                 URL: https://issues.apache.org/jira/browse/YARN-8529
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Íñigo Goiri
>            Assignee: Minni Mittal
>            Priority: Major
>         Attachments: YARN-8529.v1.patch, YARN-8529.v2.patch, YARN-8529.v3.patch, YARN-8529.v4.patch, YARN-8529.v5.patch, YARN-8529.v6.patch, YARN-8529.v7.patch, YARN-8529.v8.patch, YARN-8529.v9.patch
>
> {{RouterWebServiceUtil#invokeRMWebService}} currently has a fixed timeout. This should be configurable.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
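[Editor's note] The idea behind the request above — replace a hard-coded HTTP timeout with one read from configuration — can be sketched as follows. This is a minimal illustration, not the actual patch: the property name and default value are invented for the example, and plain `java.util.Properties` stands in for Hadoop's `Configuration`.

```java
import java.net.HttpURLConnection;
import java.util.Properties;

public class RouterTimeoutSketch {
    // Assumed property name, modeled on YARN's configuration naming style.
    static final String TIMEOUT_KEY = "yarn.router.webapp.connect-timeout.ms";
    static final int DEFAULT_TIMEOUT_MS = 30_000;

    // Resolve the timeout from configuration, falling back to a default.
    static int resolveTimeout(Properties conf) {
        String v = conf.getProperty(TIMEOUT_KEY);
        return (v == null) ? DEFAULT_TIMEOUT_MS : Integer.parseInt(v);
    }

    // Apply the resolved timeout instead of a fixed constant.
    static void applyTimeout(HttpURLConnection conn, int timeoutMs) {
        conn.setConnectTimeout(timeoutMs); // fail fast if the RM is unreachable
        conn.setReadTimeout(timeoutMs);    // bound slow responses as well
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(TIMEOUT_KEY, "5000");
        System.out.println(resolveTimeout(conf));            // 5000
        System.out.println(resolveTimeout(new Properties())); // 30000
    }
}
```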
[jira] [Commented] (YARN-10166) Add detail log for ApplicationAttemptNotFoundException
[ https://issues.apache.org/jira/browse/YARN-10166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047204#comment-17047204 ]

Youquan Lin commented on YARN-10166:
------------------------------------

[~hadoopqa] [~adam.antal] The unit test errors have nothing to do with my changes. Can you tell me what to do next?

> Add detail log for ApplicationAttemptNotFoundException
> ------------------------------------------------------
>
>                 Key: YARN-10166
>                 URL: https://issues.apache.org/jira/browse/YARN-10166
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: resourcemanager
>            Reporter: Youquan Lin
>            Priority: Minor
>              Labels: patch
>         Attachments: YARN-10166-001.patch, YARN-10166-002.patch
>
> Suppose user A killed the app; ApplicationMasterService will then call unregisterAttempt() for this app. Sometimes the app's AM continues to call the allocate() method and reports an error as follows.
> {code:java}
> Application attempt appattempt_1582520281010_15271_01 doesn't exist in ApplicationMasterService cache.
> {code}
> If user B has been watching the AM log, he will be confused about why the attempt is no longer in the ApplicationMasterService cache. So I think we can add a detailed log for ApplicationAttemptNotFoundException as follows.
> {code:java}
> Application attempt appattempt_1582630210671_14658_01 doesn't exist in ApplicationMasterService cache. App state: KILLED, finalStatus: KILLED, diagnostics: App application_1582630210671_14658 killed by userA from 127.0.0.1
> {code}
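[Editor's note] The enriched message proposed above amounts to appending the app's state, final status, and diagnostics to the existing exception text. A sketch, under the assumption that those three values are available where the exception is thrown; the helper name is invented for illustration and is not from the patch.

```java
public class AttemptNotFoundMessageSketch {
    // Build the richer message shown in the comment above.
    static String detailedMessage(String attemptId, String appState,
                                  String finalStatus, String diagnostics) {
        return "Application attempt " + attemptId
            + " doesn't exist in ApplicationMasterService cache. "
            + "App state: " + appState
            + ", finalStatus: " + finalStatus
            + ", diagnostics: " + diagnostics;
    }

    public static void main(String[] args) {
        // Values taken from the example in the comment above.
        System.out.println(detailedMessage(
            "appattempt_1582630210671_14658_01", "KILLED", "KILLED",
            "App application_1582630210671_14658 killed by userA from 127.0.0.1"));
    }
}
```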
[jira] [Commented] (YARN-6214) NullPointer Exception while querying timeline server API
[ https://issues.apache.org/jira/browse/YARN-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047125#comment-17047125 ]

Benjamin Kim commented on YARN-6214:
------------------------------------

It happened to me:
{code:java}
{"exception": "NullPointerException","javaClassName": "java.lang.NullPointerException"}
{code}
Using 2.8.4; as Jason noted, it happens while checking app types.
{code:java}
2020-02-28 09:52:20,041 WARN org.apache.hadoop.yarn.webapp.GenericExceptionHandler (2070044461@qtp-1305004711-22): INTERNAL_SERVER_ERROR
java.lang.NullPointerException
	at org.apache.hadoop.yarn.server.webapp.WebServices.getApps(WebServices.java:199)
	at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices.getApps(AHSWebServices.java:96)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
	at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
	at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
	at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
	at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:294)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1353)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
{code}
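[Editor's note] The trace above fails inside the app-type filtering in `WebServices.getApps`. A defensive version of such a check can be sketched as below. This is not the actual fix in YARN-6214, just an illustration of guarding the dereference: the method name and lower-casing behavior are assumptions for the example.

```java
import java.util.Locale;
import java.util.Set;

public class AppTypeFilterSketch {
    // Decide whether an app passes the applicationTypes filter without
    // dereferencing a null type.
    static boolean matchesType(Set<String> requestedTypes, String appType) {
        if (requestedTypes == null || requestedTypes.isEmpty()) {
            return true; // no filter requested, keep the app
        }
        // Guard: an app record with a null type can no longer cause an NPE here.
        return appType != null
            && requestedTypes.contains(appType.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(matchesType(Set.of("mapreduce"), null));        // false
        System.out.println(matchesType(Set.of("mapreduce"), "MAPREDUCE")); // true
    }
}
```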
[jira] [Commented] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047099#comment-17047099 ]

Hadoop QA commented on YARN-10110:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 54s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 29s | trunk passed |
| +1 | compile | 7m 25s | trunk passed |
| +1 | checkstyle | 1m 16s | trunk passed |
| +1 | mvnsite | 5m 23s | trunk passed |
| +1 | shadedclient | 20m 43s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn |
| +1 | findbugs | 1m 56s | trunk passed |
| +1 | javadoc | 2m 54s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 5m 11s | the patch passed |
| +1 | compile | 6m 48s | the patch passed |
| +1 | javac | 6m 48s | the patch passed |
| -0 | checkstyle | 1m 14s | hadoop-yarn-project/hadoop-yarn: The patch generated 4 new + 17 unchanged - 1 fixed = 21 total (was 18) |
| +1 | mvnsite | 5m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 14m 6s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn |
| +1 | findbugs | 2m 7s | the patch passed |
| +1 | javadoc | 2m 51s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 213m 48s | hadoop-yarn in the patch failed. |
| -1 | unit | 91m 33s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | unit | 2m 21s | hadoop-yarn-server-router in the patch passed. |
| +1 | asflicense | 1m 3s | The patch does not generate ASF License warnings. |
| | | 401m 39s | |

|| Reason || Tests ||
| Failed junit tests |
[jira] [Commented] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047098#comment-17047098 ]

Hadoop QA commented on YARN-2710:
---------------------------------

| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 43s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 20m 20s | trunk passed |
| +1 | compile | 0m 32s | trunk passed |
| +1 | checkstyle | 0m 24s | trunk passed |
| +1 | mvnsite | 0m 33s | trunk passed |
| +1 | shadedclient | 15m 51s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 46s | trunk passed |
| +1 | javadoc | 0m 22s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 40s | the patch passed |
| +1 | compile | 0m 24s | the patch passed |
| +1 | javac | 0m 24s | the patch passed |
| -0 | checkstyle | 0m 19s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14) |
| +1 | mvnsite | 0m 26s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 59s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 53s | the patch passed |
| +1 | javadoc | 0m 19s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 25m 44s | hadoop-yarn-client in the patch passed. |
| +1 | asflicense | 0m 26s | The patch does not generate ASF License warnings. |
| | | 84m 3s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-2710 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994823/YARN-2710.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7af364f6c898 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a43510e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/25601/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25601/testReport/ |
| Max. process+thread count | 531 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output |
[jira] [Commented] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047095#comment-17047095 ]

Hadoop QA commented on YARN-2710:
---------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 16m 56s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
|| || || || branch-2.10 Compile Tests ||
| +1 | mvninstall | 14m 16s | branch-2.10 passed |
| +1 | compile | 0m 30s | branch-2.10 passed with JDK v1.7.0_95 |
| +1 | compile | 0m 23s | branch-2.10 passed with JDK v1.8.0_242 |
| +1 | checkstyle | 0m 19s | branch-2.10 passed |
| +1 | mvnsite | 0m 28s | branch-2.10 passed |
| +1 | findbugs | 0m 38s | branch-2.10 passed |
| +1 | javadoc | 0m 22s | branch-2.10 passed with JDK v1.7.0_95 |
| +1 | javadoc | 0m 17s | branch-2.10 passed with JDK v1.8.0_242 |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 23s | the patch passed |
| +1 | compile | 0m 23s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 23s | the patch passed |
| +1 | compile | 0m 22s | the patch passed with JDK v1.8.0_242 |
| +1 | javac | 0m 22s | the patch passed |
| -0 | checkstyle | 0m 14s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 2 new + 23 unchanged - 2 fixed = 25 total (was 25) |
| +1 | mvnsite | 0m 29s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 45s | the patch passed |
| +1 | javadoc | 0m 20s | the patch passed with JDK v1.7.0_95 |
| +1 | javadoc | 0m 15s | the patch passed with JDK v1.8.0_242 |
|| || || || Other Tests ||
| -1 | unit | 22m 44s | hadoop-yarn-client in the patch failed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 62m 4s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMProxy |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:a969cad0a12 |
| JIRA Issue | YARN-2710 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994826/YARN-2710-branch-2.10.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 76a4c4194f40 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.10 / 2d44d7f |
| maven |
[jira] [Commented] (YARN-10174) Add colored policies to enable manual load balancing across sub clusters
[ https://issues.apache.org/jira/browse/YARN-10174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047090#comment-17047090 ]

Young Chen commented on YARN-10174:
-----------------------------------

This patch will provide a new type of policy for AMRM and Router. The policy loads different sub-cluster weights based on a "color" parameter provided with a job. Using this mechanism we can reroute containers to the sub-clusters where the job is allowed. This feature can be used to create resource-isolated sub-clusters (e.g. adhoc, SLA, experimental, etc.) that will not interfere with each other. Additionally, it may prove useful when sub-clusters are configured differently, whether that's the RM heartbeat interval, cluster size, or machine capabilities.

> Add colored policies to enable manual load balancing across sub clusters
> ------------------------------------------------------------------------
>
>                 Key: YARN-10174
>                 URL: https://issues.apache.org/jira/browse/YARN-10174
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Young Chen
>            Assignee: Young Chen
>            Priority: Major
>
> Add colored policies to enable manual load balancing across sub clusters
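[Editor's note] The color-to-weights mechanism described above can be sketched as a lookup from a job's color to a weight vector over sub-clusters, with routing favoring the highest-weighted sub-cluster. The class, method names, and weight-selection rule below are illustrative assumptions, not the YARN-10174 policy API.

```java
import java.util.HashMap;
import java.util.Map;

public class ColoredPolicySketch {
    // Per-color weight vectors over sub-cluster ids.
    private final Map<String, Map<String, Float>> weightsByColor = new HashMap<>();

    void setWeights(String color, Map<String, Float> subClusterWeights) {
        weightsByColor.put(color, subClusterWeights);
    }

    // Route to the sub-cluster with the highest weight for this job's color.
    String route(String color) {
        Map<String, Float> w = weightsByColor.get(color);
        if (w == null || w.isEmpty()) {
            throw new IllegalArgumentException("No policy for color " + color);
        }
        return w.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }

    public static void main(String[] args) {
        ColoredPolicySketch p = new ColoredPolicySketch();
        // "sla" jobs are pinned to sc-1, "adhoc" jobs to sc-2, so the two
        // workloads never share a sub-cluster.
        p.setWeights("sla", Map.of("sc-1", 1.0f, "sc-2", 0.0f));
        p.setWeights("adhoc", Map.of("sc-1", 0.0f, "sc-2", 1.0f));
        System.out.println(p.route("sla"));   // sc-1
        System.out.println(p.route("adhoc")); // sc-2
    }
}
```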
[jira] [Created] (YARN-10174) Add colored policies to enable manual load balancing across sub clusters
Young Chen created YARN-10174:
---------------------------------

             Summary: Add colored policies to enable manual load balancing across sub clusters
                 Key: YARN-10174
                 URL: https://issues.apache.org/jira/browse/YARN-10174
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Young Chen
            Assignee: Young Chen

Add colored policies to enable manual load balancing across sub clusters
[jira] [Commented] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047053#comment-17047053 ]

Ahmed Hussein commented on YARN-2710:
-------------------------------------

In addition to the error described above, I found that the tests may time out on slower machines. A timeout of 15000 ms is too small to allow enough registration retries, so I changed the timeouts and increased the retry count to 10.
{code:bash}
[INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ hadoop-yarn-client ---
[INFO]
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.yarn.client.TestResourceTrackerOnHA
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.612 s <<< FAILURE! - in org.apache.hadoop.yarn.client.TestResourceTrackerOnHA
[ERROR] testResourceTrackerOnHA(org.apache.hadoop.yarn.client.TestResourceTrackerOnHA)  Time elapsed: 19.473 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 15000 milliseconds
	at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
	at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
	at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
	at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:336)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:203)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:699)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812)
	at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
	at org.apache.hadoop.ipc.Client.call(Client.java:1452)
	at org.apache.hadoop.ipc.Client.call(Client.java:1405)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy93.registerNodeManager(Unknown Source)
	at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:73)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy94.registerNodeManager(Unknown Source)
	at org.apache.hadoop.yarn.client.TestResourceTrackerOnHA.testResourceTrackerOnHA(TestResourceTrackerOnHA.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:80)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]   TestResourceTrackerOnHA.testResourceTrackerOnHA:64 » TestTimedOut test timed o...
[INFO]
[ERROR] Tests run: 1,
{code}
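[Editor's note] The fix described above (larger timeouts, retry count raised to 10) boils down to a bounded retry loop around the registration call. A generic sketch of that pattern, with hypothetical names; it is not the code from the patch.

```java
import java.util.concurrent.Callable;

public class RegistrationRetrySketch {
    // Retry a call up to maxRetries times, sleeping between attempts, so a
    // slow RM failover can complete before the test gives up.
    static <T> T callWithRetries(Callable<T> call, int maxRetries,
                                 long retryIntervalMs) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxRetries; i++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // e.g. ConnectException while the standby RM takes over
                Thread.sleep(retryIntervalMs);
            }
        }
        throw (last != null) ? last : new IllegalStateException("maxRetries <= 0");
    }

    public static void main(String[] args) throws Exception {
        // Simulate a service that refuses the first two attempts.
        final int[] attempts = {0};
        String result = callWithRetries(() -> {
            attempts[0]++;
            if (attempts[0] < 3) {
                throw new java.io.IOException("Connection refused");
            }
            return "registered";
        }, 10, 1L);
        System.out.println(result + " after " + attempts[0] + " attempts");
    }
}
```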
[jira] [Updated] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ahmed Hussein updated YARN-2710:
--------------------------------
    Attachment: YARN-2710-branch-2.10.001.patch

> RM HA tests failed intermittently on trunk
> ------------------------------------------
>
>                 Key: YARN-2710
>                 URL: https://issues.apache.org/jira/browse/YARN-2710
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: client
>         Environment: Java 8, jenkins
>            Reporter: Wangda Tan
>            Assignee: Ahmed Hussein
>            Priority: Major
>         Attachments: TestResourceTrackerOnHA-output.2.txt, YARN-2710-branch-2.10.001.patch, YARN-2710.001.patch, org.apache.hadoop.yarn.client.TestResourceTrackerOnHA-output.txt
>
> Failures like the following can happen in TestApplicationClientProtocolOnHA, TestResourceTrackerOnHA, etc.
> {code}
> org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
> testGetApplicationAttemptsOnHA(org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA)  Time elapsed: 9.491 sec  <<< ERROR!
> java.net.ConnectException: Call From asf905.gq1.ygridcore.net/67.195.81.149 to asf905.gq1.ygridcore.net:28032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
> 	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> 	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> 	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> 	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
> 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
> 	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
> 	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1438)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> 	at com.sun.proxy.$Proxy17.getApplicationAttempts(Unknown Source)
> 	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationAttempts(ApplicationClientProtocolPBClientImpl.java:372)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
> 	at com.sun.proxy.$Proxy18.getApplicationAttempts(Unknown Source)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationAttempts(YarnClientImpl.java:583)
> 	at org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA.testGetApplicationAttemptsOnHA(TestApplicationClientProtocolOnHA.java:137)
> {code}
[jira] [Updated] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated YARN-2710: Attachment: YARN-2710.001.patch > RM HA tests failed intermittently on trunk > -- > > Key: YARN-2710 > URL: https://issues.apache.org/jira/browse/YARN-2710 > Project: Hadoop YARN > Issue Type: Bug > Components: client > Environment: Java 8, jenkins >Reporter: Wangda Tan >Assignee: Ahmed Hussein >Priority: Major > Attachments: TestResourceTrackerOnHA-output.2.txt, > YARN-2710.001.patch, > org.apache.hadoop.yarn.client.TestResourceTrackerOnHA-output.txt
[jira] [Assigned] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein reassigned YARN-2710: --- Assignee: Ahmed Hussein > RM HA tests failed intermittently on trunk > -- > > Key: YARN-2710 > URL: https://issues.apache.org/jira/browse/YARN-2710 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0-alpha1 > Environment: Java 8, jenkins >Reporter: Wangda Tan >Assignee: Ahmed Hussein >Priority: Major > Attachments: TestResourceTrackerOnHA-output.2.txt, > org.apache.hadoop.yarn.client.TestResourceTrackerOnHA-output.txt
[jira] [Updated] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-10161: --- Fix Version/s: 2.10.1 3.1.4 3.2.2 3.3.0 3.0.4 > TestRouterWebServicesREST is corrupting STDOUT > -- > > Key: YARN-10161 > URL: https://issues.apache.org/jira/browse/YARN-10161 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 2.10.0, 3.2.1 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Fix For: 3.0.4, 3.3.0, 3.2.2, 3.1.4, 2.10.1 > > Attachments: YARN-10161.001.patch, YARN-10161.002.patch, > YARN-10161.003.patch > > > TestRouterWebServicesREST is creating processes that inherit stdin/stdout > from the current process, so the output from those jobs goes into the > standard output of mvn test. > Here's an example from a recent build: > {noformat} > [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM > 1. See FAQ web page and the dump file > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream > [INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 41.644 s - in > org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST > [WARNING] ForkStarter IOException: 506 INFO [main] > resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG: > 522 INFO [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - > registered UNIX signal handlers for [TERM, HUP, INT] > 876 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not > found > 879 INFO [main] security.Groups (Groups.java:refresh(402)) - clearing > userToGroupsMap cache > 930 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml > not found > 930 INFO [main] resource.ResourceUtils > 
(ResourceUtils.java:addResourcesFileToConf(421)) - Unable to find > 'resource-types.xml'. > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name > = memory-mb, units = Mi, type = COUNTABLE > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name > = vcores, units = , type = COUNTABLE > 974 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2591)) - found resource > yarn-site.xml at > file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml > 001 INFO [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - > Registering class > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher > 053 INFO [main] security.NMTokenSecretManagerInRM > (NMTokenSecretManagerInRM.java:(75)) - NMTokenKeyRollingInterval: > 8640ms and NMTokenKeyActivationDelay: 90ms > 060 INFO [main] security.RMContainerTokenSecretManager > (RMContainerTokenSecretManager.java:(79)) - > ContainerTokenKeyRollingInterval: 8640ms and > ContainerTokenKeyActivationDelay: 90ms > ... {noformat} > It seems like these processes should be rerouting stdout/stderr to a file > instead of dumping it to the console. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
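The fix the description above suggests — sending each forked process's stdout/stderr to a file instead of letting it inherit the parent JVM's streams — can be sketched with ProcessBuilder. This is an illustrative sketch under assumptions, not the actual YARN-10161 patch: the JavaProcessRedirect class name and the ".out.txt"/".err.txt" log naming are made up here.

```java
import java.io.File;

// Sketch of the approach described above: instead of ProcessBuilder.inheritIO(),
// which lets a forked JVM write straight into the parent's (surefire's) stdout,
// route each child's stdout/stderr to its own log file.
// JavaProcessRedirect and the log-file naming are illustrative assumptions,
// not the code from the YARN-10161 patch.
public class JavaProcessRedirect {

    public static ProcessBuilder configure(String mainClass, File logDir) {
        ProcessBuilder pb = new ProcessBuilder("java", mainClass);
        // Per-process log files keep child output out of mvn test's console stream.
        pb.redirectOutput(ProcessBuilder.Redirect.to(new File(logDir, mainClass + ".out.txt")));
        pb.redirectError(ProcessBuilder.Redirect.to(new File(logDir, mainClass + ".err.txt")));
        return pb;
    }

    public static void main(String[] args) {
        File logDir = new File(System.getProperty("java.io.tmpdir"));
        ProcessBuilder pb = configure("SomeTestDaemon", logDir);
        // The builder now targets files rather than the parent's streams.
        System.out.println(pb.redirectOutput().file().getName());
        System.out.println(pb.redirectError().file().getName());
    }
}
```

The committed change edits the test's JavaProcess helper along these lines, so output from the forked ResourceManager/NodeManager/Router JVMs no longer lands in surefire's STDOUT.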
[jira] [Commented] (YARN-6924) Metrics for Federation AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17047015#comment-17047015 ] Young Chen commented on YARN-6924: -- Thanks for the feedback [~bibinchundatt] - fixed the formatting & licenses in the newest patch. > Metrics for Federation AMRMProxy > > > Key: YARN-6924 > URL: https://issues.apache.org/jira/browse/YARN-6924 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Young Chen >Priority: Major > Attachments: YARN-6924.01.patch, YARN-6924.01.patch, > YARN-6924.02.patch, YARN-6924.02.patch, YARN-6924.03.patch, > YARN-6924.04.patch, YARN-6924.05.patch > > > This JIRA proposes addition of metrics for Federation AMRMProxy -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046993#comment-17046993 ] Hudson commented on YARN-10161: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18009 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18009/]) YARN-10161. TestRouterWebServicesREST is corrupting STDOUT. Contributed (inigoiri: rev a43510e21d01e6c78e98e7ad9469cbea70a66466) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/JavaProcess.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestRouterWebServicesREST.java
[jira] [Commented] (YARN-6924) Metrics for Federation AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046984#comment-17046984 ] Hadoop QA commented on YARN-6924: - (/) *+1 overall*
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 43s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 22m 4s | trunk passed |
| +1 | compile | 1m 12s | trunk passed |
| +1 | checkstyle | 0m 28s | trunk passed |
| +1 | mvnsite | 0m 42s | trunk passed |
| +1 | shadedclient | 15m 48s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 30s | trunk passed |
| +1 | javadoc | 0m 29s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 38s | the patch passed |
| +1 | compile | 1m 5s | the patch passed |
| +1 | javac | 1m 5s | the patch passed |
| +1 | checkstyle | 0m 23s | the patch passed |
| +1 | mvnsite | 0m 35s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 17s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 24s | the patch passed |
| +1 | javadoc | 0m 40s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 25m 50s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 89m 50s | |
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-6924 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994817/YARN-6924.05.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux de4a4360c795 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 10461e0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25600/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25600/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated. > Metrics for Federation AMRMProxy >
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046983#comment-17046983 ] Jim Brennan commented on YARN-10161: Thanks [~inigoiri]!
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046980#comment-17046980 ] Íñigo Goiri commented on YARN-10161: Thanks for the patch [~Jim_Brennan]. Committed to trunk, branch-3.2, branch-3.1, branch-3.0, and branch-2.10.
[jira] [Commented] (YARN-10155) TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk
[ https://issues.apache.org/jira/browse/YARN-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046979#comment-17046979 ] Hudson commented on YARN-10155: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18008 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18008/]) YARN-10155. TestDelegationTokenRenewer.testTokenThreadTimeout fails in (inigoiri: rev b420ddeada5300e22b8e3ad6c9ccd1549dc797c2) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java > TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk > > > Key: YARN-10155 > URL: https://issues.apache.org/jira/browse/YARN-10155 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Manikandan R >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-10155.001.patch, testTokenThreadTimeout.txt, > testTokenThreadTimeout_with_patch.txt > > > The TestDelegationTokenRenewer.testTokenThreadTimeout test committed in > YARN-9768 often fails with timeout. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10155) TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk
[ https://issues.apache.org/jira/browse/YARN-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046971#comment-17046971 ] Íñigo Goiri commented on YARN-10155: Thanks [~adam.antal] for bringing this up and checking. Thanks [~maniraj...@gmail.com] for the fix. Committed to trunk.
[jira] [Resolved] (YARN-10155) TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk
[ https://issues.apache.org/jira/browse/YARN-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri resolved YARN-10155. Fix Version/s: 3.3.0 Hadoop Flags: Reviewed Resolution: Fixed
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046970#comment-17046970 ] Jim Brennan commented on YARN-10161: Thanks [~inigoiri]! Can you commit this to trunk and other branches? I've verified that the patch applies cleanly to branch-2.10, which is where I noticed the problem. > TestRouterWebServicesREST is corrupting STDOUT > -- > > Key: YARN-10161 > URL: https://issues.apache.org/jira/browse/YARN-10161 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 2.10.0, 3.2.1 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Attachments: YARN-10161.001.patch, YARN-10161.002.patch, > YARN-10161.003.patch > > > TestRouterWebServicesREST is creating processes that inherit stdin/stdout > from the current process, so the output from those jobs goes into the > standard output of mvn test. > Here's an example from a recent build: > {noformat} > [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM > 1. 
See FAQ web page and the dump file > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream > [INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 41.644 s - in > org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST > [WARNING] ForkStarter IOException: 506 INFO [main] > resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG: > 522 INFO [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - > registered UNIX signal handlers for [TERM, HUP, INT] > 876 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not > found > 879 INFO [main] security.Groups (Groups.java:refresh(402)) - clearing > userToGroupsMap cache > 930 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml > not found > 930 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addResourcesFileToConf(421)) - Unable to find > 'resource-types.xml'. 
> 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name > = memory-mb, units = Mi, type = COUNTABLE > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name > = vcores, units = , type = COUNTABLE > 974 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2591)) - found resource > yarn-site.xml at > file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml > 001 INFO [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - > Registering class > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher > 053 INFO [main] security.NMTokenSecretManagerInRM > (NMTokenSecretManagerInRM.java:(75)) - NMTokenKeyRollingInterval: > 8640ms and NMTokenKeyActivationDelay: 90ms > 060 INFO [main] security.RMContainerTokenSecretManager > (RMContainerTokenSecretManager.java:(79)) - > ContainerTokenKeyRollingInterval: 8640ms and > ContainerTokenKeyActivationDelay: 90ms > ... {noformat} > It seems like these processes should be rerouting stdout/stderr to a file > instead of dumping it to the console. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
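The redirect suggested at the end of the report can be sketched with `java.lang.ProcessBuilder`: instead of letting the forked processes inherit the parent's streams, send each child's stdout/stderr to its own log file. The class, method, and file names below are illustrative assumptions, not the actual YARN-10161 patch:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class RedirectDemo {

    /**
     * Launches a child process whose stdout and stderr go to
     * logDir/name.log instead of the parent's console. Hypothetical
     * helper sketching the idea, not the committed fix.
     */
    public static Process launch(String[] cmd, File logDir, String name)
            throws IOException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true); // fold stderr into stdout
        File log = new File(logDir, name + ".log");
        // Redirect.to(...) writes to a file, rather than
        // ProcessBuilder.Redirect.INHERIT, which passes the parent's
        // stream through and is what corrupts surefire's stdout.
        pb.redirectOutput(ProcessBuilder.Redirect.to(log));
        return pb.start();
    }

    public static void main(String[] args) throws Exception {
        File dir = Files.createTempDirectory("proc-logs").toFile();
        Process child = launch(new String[] {"echo", "hello"}, dir, "demo");
        child.waitFor();
        String out = new String(
                Files.readAllBytes(new File(dir, "demo.log").toPath()));
        System.out.println(out.trim()); // prints "hello"
    }
}
```

With this shape, the mvn test console stays clean and the child output is still available in the surefire-reports (or any chosen) directory for debugging.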
[jira] [Commented] (YARN-10155) TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk
[ https://issues.apache.org/jira/browse/YARN-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046966#comment-17046966 ] Íñigo Goiri commented on YARN-10155: +1 on [^YARN-10155.001.patch]; let's go with it. Let's keep an eye on it once merged. > TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk > > > Key: YARN-10155 > URL: https://issues.apache.org/jira/browse/YARN-10155 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Manikandan R >Priority: Major > Attachments: YARN-10155.001.patch, testTokenThreadTimeout.txt, > testTokenThreadTimeout_with_patch.txt > > > The TestDelegationTokenRenewer.testTokenThreadTimeout test committed in > YARN-9768 often fails with timeout.
[jira] [Commented] (YARN-9879) Allow multiple leaf queues with the same name in CS
[ https://issues.apache.org/jira/browse/YARN-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046963#comment-17046963 ] Hadoop QA commented on YARN-9879: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 16 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 16s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 15s{color} | {color:orange} root: The patch generated 83 new + 2144 unchanged - 5 fixed = 2227 total (was 2149) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 40s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 7s{color} | {color:green} hadoop-sls in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 51s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestParentQueue | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption | | | hadoop.yarn.server.resourcemanager.TestClientRMService | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivitiesWithMultiNodesEnabled | | | hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyPreemptToBalance | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestAbsoluteResourceConfiguration | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueParsing | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate | | |
[jira] [Updated] (YARN-6924) Metrics for Federation AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Young Chen updated YARN-6924: - Attachment: YARN-6924.05.patch > Metrics for Federation AMRMProxy > > > Key: YARN-6924 > URL: https://issues.apache.org/jira/browse/YARN-6924 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Young Chen >Priority: Major > Attachments: YARN-6924.01.patch, YARN-6924.01.patch, > YARN-6924.02.patch, YARN-6924.02.patch, YARN-6924.03.patch, > YARN-6924.04.patch, YARN-6924.05.patch > > > This JIRA proposes addition of metrics for Federation AMRMProxy
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046930#comment-17046930 ] Hudson commented on YARN-10148: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18007 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18007/]) YARN-10148. Add Unit test for queue ACL for both FS and CS. Contributed (snemeth: rev 10461e01932bcd82a9d4e3ab8109df7ead560b14) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerQueueACLs.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerQueueACLs.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/ACLsTestBase.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/QueueACLsTestBase.java > Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch, > YARN-10148.006.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046914#comment-17046914 ] Szilard Nemeth commented on YARN-10148: --- Thanks [~kmarton] for your patch, committed to trunk. Thanks [~adam.antal] for the review. > Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch, > YARN-10148.006.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS.
[jira] [Updated] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10148: -- Fix Version/s: 3.3.0 > Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch, > YARN-10148.006.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS.
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046906#comment-17046906 ] Szilard Nemeth commented on YARN-10148: --- Hi [~kmarton], Thanks for fixing all of my comments. I found two things, which I have corrected based on our offline discussion: 1. The Javadoc of the testcase org.apache.hadoop.yarn.server.resourcemanager.QueueACLsTestBase#testQueueAclDefaultValues was misleading: {code:java} /** * Test for the case when no ACLs are defined, so the default values are used * Expected result: The default ACLs for the root queue is "*"(all) and for * the other queues are " " (none), so the user will have access to all the * queues because they will have permissions from the root. * * @throws IOException */ {code} Specifically this part: {code:java} The default ACLs for the root queue is "*"(none) and for * the other {code} The value "*" means all, so I changed "none" to "all". 2. In TestCapacitySchedulerQueueACLs#updateConfigWithDAndD1Queues: The code that sets ACLs for the D and D1 queues is this: {code:java} if (queueDAcl != null) { setAdminAndSubmitACL(csConf, dPath, queueDAcl); csConf.setAcl(dPath, QueueACL.ADMINISTER_QUEUE, queueDAcl); csConf.setAcl(dPath, QueueACL.SUBMIT_APPLICATIONS, queueDAcl); } if (queueD1Acl != null) { setAdminAndSubmitACL(csConf, d1Path, queueD1Acl); csConf.setAcl(d1Path, QueueACL.ADMINISTER_QUEUE, queueD1Acl); csConf.setAcl(d1Path, QueueACL.SUBMIT_APPLICATIONS, queueD1Acl); } {code} Here, you extracted the setAdminAndSubmitACL method, but the additional csConf.setAcl() calls were accidentally left in both conditions, so I removed them. 
> Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch, > YARN-10148.006.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS.
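The leftover-duplicate-call issue described in point 2 above is easy to see with a self-contained stand-in. `MiniConf` below only mimics the `setAcl` behavior of a scheduler configuration for illustration; it is not the Hadoop `CapacitySchedulerConfiguration`, and all names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class AclRefactorDemo {

    enum QueueACL { ADMINISTER_QUEUE, SUBMIT_APPLICATIONS }

    /** Tiny stand-in for a scheduler configuration (illustrative only). */
    static class MiniConf {
        final Map<String, String> acls = new HashMap<>();

        void setAcl(String queuePath, QueueACL type, String acl) {
            acls.put(queuePath + "." + type, acl);
        }
    }

    /** The extracted helper: one place that sets both ACL types. */
    static void setAdminAndSubmitACL(MiniConf conf, String queuePath,
            String acl) {
        conf.setAcl(queuePath, QueueACL.ADMINISTER_QUEUE, acl);
        conf.setAcl(queuePath, QueueACL.SUBMIT_APPLICATIONS, acl);
    }

    public static void main(String[] args) {
        MiniConf conf = new MiniConf();
        String dPath = "root.D";
        String queueDAcl = "userD";
        if (queueDAcl != null) {
            // After the cleanup only the helper remains; repeating the two
            // direct setAcl(...) calls here would just overwrite the same
            // keys with the same values.
            setAdminAndSubmitACL(conf, dPath, queueDAcl);
        }
        System.out.println(conf.acls.get("root.D.ADMINISTER_QUEUE"));    // userD
        System.out.println(conf.acls.get("root.D.SUBMIT_APPLICATIONS")); // userD
    }
}
```

The design point is the usual one for extracted helpers: once the helper owns both `setAcl` calls, any call sites that still set the same ACLs directly are redundant and should be deleted with the refactor.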
[jira] [Commented] (YARN-8529) Add timeout to RouterWebServiceUtil#invokeRMWebService
[ https://issues.apache.org/jira/browse/YARN-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046861#comment-17046861 ] Hadoop QA commented on YARN-8529: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 17s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 214 unchanged - 1 fixed = 216 total (was 215) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s{color} | {color:green} hadoop-yarn-server-router in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 85m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-8529 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994801/YARN-8529.v9.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6559b79b41df 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 57aa048 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046857#comment-17046857 ] Hadoop QA commented on YARN-10148: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 90m 37s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}150m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10148 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994797/YARN-10148.006.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a96d9d39c8c8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2059f25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25595/testReport/ | | Max. process+thread count | 837 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25595/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Add Unit test for queue ACL for both FS and CS >
[jira] [Updated] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated YARN-10110: - Attachment: YARN-10110.003.patch > In Federation Secure cluster Application submission fails when authorization > is enabled > --- > > Key: YARN-10110 > URL: https://issues.apache.org/jira/browse/YARN-10110 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Blocker > Attachments: YARN-10110.001.patch, YARN-10110.002.patch, > YARN-10110.003.patch > > > 【Precondition】: > 1. Secure Federated cluster is available > 2. Add the below configuration in Router and client core-site.xml > hadoop.security.authorization=true > 3. Restart the router service > 【Test step】: > 1. Go to router client bin path and submit a MR PI job > 2. Observe the client console screen > 【Expected Output】: > No error should be thrown and Job should be successful > 【Actual Output】: > Job failed prompting "Protocol interface > org.apache.hadoop.yarn.api.ApplicationClientProtocolPB is not known.," > 【Additional Note】: > But on setting the parameter to false, the job is submitted and succeeds.
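Precondition step 2 above corresponds to the following core-site.xml fragment on the Router and the client. This is a sketch using the standard Hadoop property syntax; only the property named in the steps is shown:

```xml
<!-- core-site.xml on the Router and the client -->
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

With this set to true, service-level authorization is enforced, which is what exposes the "Protocol interface ... is not known" failure on the Router's protocol.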
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046850#comment-17046850 ] Íñigo Goiri commented on YARN-10161: Yes, that was it. +1 on [^YARN-10161.003.patch]. > TestRouterWebServicesREST is corrupting STDOUT > -- > > Key: YARN-10161 > URL: https://issues.apache.org/jira/browse/YARN-10161 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 2.10.0, 3.2.1 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Attachments: YARN-10161.001.patch, YARN-10161.002.patch, > YARN-10161.003.patch > > > TestRouterWebServicesREST is creating processes that inherit stdin/stdout > from the current process, so the output from those jobs goes into the > standard output of mvn test. > Here's an example from a recent build: > {noformat} > [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM > 1. See FAQ web page and the dump file > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream > [INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 41.644 s - in > org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST > [WARNING] ForkStarter IOException: 506 INFO [main] > resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG: > 522 INFO [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - > registered UNIX signal handlers for [TERM, HUP, INT] > 876 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not > found > 879 INFO [main] security.Groups (Groups.java:refresh(402)) - clearing > userToGroupsMap cache > 930 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml > not found > 930 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addResourcesFileToConf(421)) - Unable to 
find > 'resource-types.xml'. > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name > = memory-mb, units = Mi, type = COUNTABLE > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name > = vcores, units = , type = COUNTABLE > 974 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2591)) - found resource > yarn-site.xml at > file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml > 001 INFO [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - > Registering class > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher > 053 INFO [main] security.NMTokenSecretManagerInRM > (NMTokenSecretManagerInRM.java:(75)) - NMTokenKeyRollingInterval: > 8640ms and NMTokenKeyActivationDelay: 90ms > 060 INFO [main] security.RMContainerTokenSecretManager > (RMContainerTokenSecretManager.java:(79)) - > ContainerTokenKeyRollingInterval: 8640ms and > ContainerTokenKeyActivationDelay: 90ms > ... {noformat} > It seems like these processes should be rerouting stdout/stderr to a file > instead of dumping it to the console. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated YARN-10110: - Attachment: (was: YARN-10110.003.patch) > In Federation Secure cluster Application submission fails when authorization > is enabled > --- > > Key: YARN-10110 > URL: https://issues.apache.org/jira/browse/YARN-10110 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Blocker > Attachments: YARN-10110.001.patch, YARN-10110.002.patch > > > 【Precondition】: > 1. Secure Federated cluster is available > 2. Add the below configuration in Router and client core-site.xml > hadoop.security.authorization=true > 3. Restart the router service > 【Test step】: > 1. Go to router client bin path and submit a MR PI job > 2. Observe the client console screen > 【Expected Output】: > No error should be thrown and Job should be successful > 【Actual Output】: > Job failed prompting "Protocol interface > org.apache.hadoop.yarn.api.ApplicationClientProtocolPB is not known.," > 【Additional Note】: > But on setting the parameter to false, the job is submitted and succeeds.
[jira] [Updated] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated YARN-10110: - Attachment: YARN-10110.003.patch > In Federation Secure cluster Application submission fails when authorization > is enabled > --- > > Key: YARN-10110 > URL: https://issues.apache.org/jira/browse/YARN-10110 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Blocker > Attachments: YARN-10110.001.patch, YARN-10110.002.patch, > YARN-10110.003.patch > > > 【Precondition】: > 1. A secure federated cluster is available > 2. Add the configuration below to the Router and client core-site.xml: > hadoop.security.authorization = true > 3. Restart the Router service > 【Test step】: > 1. Go to the Router client bin path and submit an MR Pi job > 2. Observe the client console output > 【Expected Output】: > No error should be thrown and the job should succeed > 【Actual Output】: > The job fails, prompting "Protocol interface > org.apache.hadoop.yarn.api.ApplicationClientProtocolPB is not known.," > 【Additional Note】: > When the parameter is set to false, the job is submitted and succeeds. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
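For reference, the precondition quoted in the issue above corresponds to a core-site.xml entry along these lines. This is a minimal sketch: `hadoop.security.authorization` is the standard Hadoop service-level authorization switch, but the exact file layout on a given cluster may differ.

```xml
<!-- core-site.xml on the Router and client nodes -->
<configuration>
  <property>
    <!-- Enables service-level authorization checks for Hadoop RPC protocols -->
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
</configuration>
```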
[jira] [Updated] (YARN-10120) In Federation Router Nodes/Applications/About pages throws 500 exception when https is enabled
[ https://issues.apache.org/jira/browse/YARN-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated YARN-10120: - Component/s: (was: yarn) federation > In Federation Router Nodes/Applications/About pages throws 500 exception when > https is enabled > -- > > Key: YARN-10120 > URL: https://issues.apache.org/jira/browse/YARN-10120 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Critical > > In Federation Router Nodes/Applications/About pages throws 500 exception when > https is enabled. > yarn.router.webapp.https.address =router ip:8091 > {noformat} > 2020-02-07 16:38:49,990 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster/apps > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:166) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277) > at > com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182) > at > com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875) > at > 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82) > at > com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119) > at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133) > at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130) > at > com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203) > at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1622) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046843#comment-17046843 ] Hadoop QA commented on YARN-10161: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s{color} | {color:green} hadoop-yarn-server-router in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10161 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994802/YARN-10161.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5114ee664e68 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 57aa048 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25597/testReport/ | | Max. process+thread count | 764 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25597/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > TestRouterWebServicesREST is corrupting STDOUT >
[jira] [Commented] (YARN-9831) NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService allocate flow
[ https://issues.apache.org/jira/browse/YARN-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046840#comment-17046840 ] Manikandan R commented on YARN-9831: [~BilwaST] Thanks for the patch. Had a quick glance. The lock has been changed from "write" to "read" in the {{createAndGetNMToken}} method, assuming there shouldn't be any issue while adding node IDs into the set ( nodeSet.add(container.getNodeId()); ) because it was created with ConcurrentHashMap.newKeySet(). If this is true, should we apply the same principle to the other places, where a node gets removed ( removeNodeKey() )? > NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService > allocate flow > > > Key: YARN-9831 > URL: https://issues.apache.org/jira/browse/YARN-9831 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Bibin Chundatt >Assignee: Bilwa S T >Priority: Critical > Attachments: YARN-9831.001.patch, YARN-9831.002.patch > > > Currently an attempt's NMToken cannot be generated independently: > each attempt's allocate flow blocks the others. We should improve this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
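The reasoning in the review comment above — that a read lock is sufficient for adding node IDs when the set itself is thread-safe — can be sketched in plain Java. This is a hypothetical stand-in for the pattern in NMTokenSecretManagerInRM, not the actual patch; class and method names here are illustrative only.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal stand-in: many AM allocate threads may record node IDs concurrently.
class NodeSetTracker {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    // Thread-safe set, so concurrent adds do not require the write lock.
    private final Set<String> nodeSet = ConcurrentHashMap.newKeySet();

    // Read lock only: adds to a ConcurrentHashMap-backed set are safe under
    // concurrency, so allocate flows no longer serialize behind one write lock.
    boolean trackNode(String nodeId) {
        lock.readLock().lock();
        try {
            return nodeSet.add(nodeId);
        } finally {
            lock.readLock().unlock();
        }
    }

    // The write lock is still taken where a stable, exclusive view is needed,
    // e.g. rolling the master key.
    void rollKey(Runnable keyRoll) {
        lock.writeLock().lock();
        try {
            keyRoll.run();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Under this scheme the open question from the review still applies: removal (the removeNodeKey() path) could also take the read lock, as long as no caller relies on add and remove being mutually exclusive.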
[jira] [Updated] (YARN-9879) Allow multiple leaf queues with the same name in CS
[ https://issues.apache.org/jira/browse/YARN-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gergely Pollak updated YARN-9879: - Attachment: YARN-9879.POC006.patch > Allow multiple leaf queues with the same name in CS > --- > > Key: YARN-9879 > URL: https://issues.apache.org/jira/browse/YARN-9879 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gergely Pollak >Assignee: Gergely Pollak >Priority: Major > Labels: fs2cs > Attachments: CSQueue.getQueueUsage.txt, DesignDoc_v1.pdf, > YARN-9879.POC001.patch, YARN-9879.POC002.patch, YARN-9879.POC003.patch, > YARN-9879.POC004.patch, YARN-9879.POC005.patch, YARN-9879.POC006.patch > > > Currently a leaf queue's name must be unique regardless of its position in > the queue hierarchy. > A design doc and a first proposal are being prepared; I'll attach them as soon as > they're done. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046794#comment-17046794 ] Jim Brennan commented on YARN-10161: patch 003 fixes the whitespace issue. > TestRouterWebServicesREST is corrupting STDOUT > -- > > Key: YARN-10161 > URL: https://issues.apache.org/jira/browse/YARN-10161 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 2.10.0, 3.2.1 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Attachments: YARN-10161.001.patch, YARN-10161.002.patch, > YARN-10161.003.patch > > > TestRouterWebServicesREST is creating processes that inherit stdin/stdout > from the current process, so the output from those jobs goes into the > standard output of mvn test. > Here's an example from a recent build: > {noformat} > [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM > 1. See FAQ web page and the dump file > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream > [INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 41.644 s - in > org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST > [WARNING] ForkStarter IOException: 506 INFO [main] > resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG: > 522 INFO [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - > registered UNIX signal handlers for [TERM, HUP, INT] > 876 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not > found > 879 INFO [main] security.Groups (Groups.java:refresh(402)) - clearing > userToGroupsMap cache > 930 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml > not found > 930 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addResourcesFileToConf(421)) - Unable to find > 
'resource-types.xml'. > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name > = memory-mb, units = Mi, type = COUNTABLE > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name > = vcores, units = , type = COUNTABLE > 974 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2591)) - found resource > yarn-site.xml at > file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml > 001 INFO [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - > Registering class > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher > 053 INFO [main] security.NMTokenSecretManagerInRM > (NMTokenSecretManagerInRM.java:(75)) - NMTokenKeyRollingInterval: > 8640ms and NMTokenKeyActivationDelay: 90ms > 060 INFO [main] security.RMContainerTokenSecretManager > (RMContainerTokenSecretManager.java:(79)) - > ContainerTokenKeyRollingInterval: 8640ms and > ContainerTokenKeyActivationDelay: 90ms > ... {noformat} > It seems like these processes should be rerouting stdout/stderr to a file > instead of dumping it to the console. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated YARN-10161: --- Attachment: YARN-10161.003.patch > TestRouterWebServicesREST is corrupting STDOUT > -- > > Key: YARN-10161 > URL: https://issues.apache.org/jira/browse/YARN-10161 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 2.10.0, 3.2.1 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Attachments: YARN-10161.001.patch, YARN-10161.002.patch, > YARN-10161.003.patch > > > TestRouterWebServicesREST is creating processes that inherit stdin/stdout > from the current process, so the output from those jobs goes into the > standard output of mvn test. > Here's an example from a recent build: > {noformat} > [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM > 1. See FAQ web page and the dump file > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream > [INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 41.644 s - in > org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST > [WARNING] ForkStarter IOException: 506 INFO [main] > resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG: > 522 INFO [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - > registered UNIX signal handlers for [TERM, HUP, INT] > 876 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not > found > 879 INFO [main] security.Groups (Groups.java:refresh(402)) - clearing > userToGroupsMap cache > 930 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml > not found > 930 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addResourcesFileToConf(421)) - Unable to find > 'resource-types.xml'. 
> 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name > = memory-mb, units = Mi, type = COUNTABLE > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name > = vcores, units = , type = COUNTABLE > 974 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2591)) - found resource > yarn-site.xml at > file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml > 001 INFO [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - > Registering class > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher > 053 INFO [main] security.NMTokenSecretManagerInRM > (NMTokenSecretManagerInRM.java:(75)) - NMTokenKeyRollingInterval: > 8640ms and NMTokenKeyActivationDelay: 90ms > 060 INFO [main] security.RMContainerTokenSecretManager > (RMContainerTokenSecretManager.java:(79)) - > ContainerTokenKeyRollingInterval: 8640ms and > ContainerTokenKeyActivationDelay: 90ms > ... {noformat} > It seems like these processes should be rerouting stdout/stderr to a file > instead of dumping it to the console. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
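The fix direction proposed in the YARN-10161 description — having the spawned RM/Router JVMs write to files instead of inheriting the test runner's console — follows the standard ProcessBuilder redirection pattern. The sketch below is generic and hypothetical (class and method names are not from the actual patch):

```java
import java.io.File;
import java.io.IOException;

// Generic sketch: give each spawned JVM its own log files instead of
// ProcessBuilder.Redirect.INHERIT, so the child processes never write to
// the STDOUT that surefire manages for the forked test JVM.
public class ForkedProcessLauncher {
    public static Process launch(String[] command, File logDir, String name)
            throws IOException {
        ProcessBuilder pb = new ProcessBuilder(command);
        // Redirect.to(file): each file is created/truncated when the child starts.
        pb.redirectOutput(new File(logDir, name + ".out"));
        pb.redirectError(new File(logDir, name + ".err"));
        return pb.start();
    }
}
```

With a launcher like this, the INFO logging shown in the {noformat} block above would land in per-process .out/.err files under the test's target directory rather than corrupting the mvn test output stream.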
[jira] [Updated] (YARN-8529) Add timeout to RouterWebServiceUtil#invokeRMWebService
[ https://issues.apache.org/jira/browse/YARN-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Minni Mittal updated YARN-8529: --- Attachment: YARN-8529.v9.patch > Add timeout to RouterWebServiceUtil#invokeRMWebService > -- > > Key: YARN-8529 > URL: https://issues.apache.org/jira/browse/YARN-8529 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: Minni Mittal >Priority: Major > Attachments: YARN-8529.v1.patch, YARN-8529.v2.patch, > YARN-8529.v3.patch, YARN-8529.v4.patch, YARN-8529.v5.patch, > YARN-8529.v6.patch, YARN-8529.v7.patch, YARN-8529.v8.patch, YARN-8529.v9.patch > > > {{RouterWebServiceUtil#invokeRMWebService}} currently has a fixed timeout. > This should be configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
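The improvement tracked in YARN-8529 is to replace the hard-coded timeout with a configurable one. The shape of that change can be sketched as below, with plain java.util.Properties standing in for Hadoop's Configuration and with a hypothetical property name and default; the actual key is whatever the patch defines in YarnConfiguration.

```java
import java.util.Properties;

public class RouterTimeoutConfig {
    // Hypothetical key and default, for illustration only.
    static final String CONNECT_TIMEOUT_KEY = "yarn.router.webapp.connect-timeout.ms";
    static final int DEFAULT_CONNECT_TIMEOUT_MS = 30_000;

    // Read the timeout from configuration, falling back to a default —
    // the same shape as Configuration.getInt(key, defaultValue) in Hadoop.
    static int getConnectTimeoutMs(Properties conf) {
        String v = conf.getProperty(CONNECT_TIMEOUT_KEY);
        return v == null ? DEFAULT_CONNECT_TIMEOUT_MS : Integer.parseInt(v.trim());
    }
}
```

The resolved value would then be applied to the HTTP client (connect and read timeouts) before invoking the RM web service, instead of a literal constant inside invokeRMWebService.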
[jira] [Commented] (YARN-10155) TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk
[ https://issues.apache.org/jira/browse/YARN-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046790#comment-17046790 ] Adam Antal commented on YARN-10155: --- I agree on that. Thanks! > TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk > > > Key: YARN-10155 > URL: https://issues.apache.org/jira/browse/YARN-10155 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Manikandan R >Priority: Major > Attachments: YARN-10155.001.patch, testTokenThreadTimeout.txt, > testTokenThreadTimeout_with_patch.txt > > > The TestDelegationTokenRenewer.testTokenThreadTimeout test committed in > YARN-9768 often fails with timeout. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6924) Metrics for Federation AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046785#comment-17046785 ] Bibin Chundatt edited comment on YARN-6924 at 2/27/20 4:33 PM: --- [~youchen] Overall, the patch looks good. Minor nits: * The annotation and the method signature should be on separate lines. * The same applies to the variables in AMRMProxyMetrics. * Since the test cases are in the same package, the visibility of the getter methods could be package-private. * Correct the Apache source file copyright headers too. was (Author: bibinchundatt): [~youchen] Over all the patch looks good.. Minor nits : * Annotation and the method signature to be in different lines * Same applies for the variables too in AMRMProxyMetrics. * Since the testcase are in same package the visibility for get methods could be package private. > Metrics for Federation AMRMProxy > > > Key: YARN-6924 > URL: https://issues.apache.org/jira/browse/YARN-6924 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Young Chen >Priority: Major > Attachments: YARN-6924.01.patch, YARN-6924.01.patch, > YARN-6924.02.patch, YARN-6924.02.patch, YARN-6924.03.patch, YARN-6924.04.patch > > > This JIRA proposes the addition of metrics for the Federation AMRMProxy -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6924) Metrics for Federation AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046785#comment-17046785 ] Bibin Chundatt commented on YARN-6924: -- [~youchen] Overall, the patch looks good. Minor nits: * The annotation and the method signature should be on separate lines. * The same applies to the variables in AMRMProxyMetrics. * Since the test cases are in the same package, the visibility of the getter methods could be package-private. > Metrics for Federation AMRMProxy > > > Key: YARN-6924 > URL: https://issues.apache.org/jira/browse/YARN-6924 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Young Chen >Priority: Major > Attachments: YARN-6924.01.patch, YARN-6924.01.patch, > YARN-6924.02.patch, YARN-6924.02.patch, YARN-6924.03.patch, YARN-6924.04.patch > > > This JIRA proposes the addition of metrics for the Federation AMRMProxy -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046784#comment-17046784 ] Hadoop QA commented on YARN-10161: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10161 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994796/YARN-10161.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c3f27b782849 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2059f25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/25594/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25594/testReport/ | | Max. process+thread count | 754 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25594/console |
[jira] [Commented] (YARN-10155) TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk
[ https://issues.apache.org/jira/browse/YARN-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046758#comment-17046758 ] Manikandan R commented on YARN-10155: - Ok. However, we can get this patch in as it fixes the exception and reduces the waiting time. Post that, we can see the behaviour in Jenkins for some time and then close the Jira based on the results. Thoughts? > TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk > > > Key: YARN-10155 > URL: https://issues.apache.org/jira/browse/YARN-10155 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Manikandan R >Priority: Major > Attachments: YARN-10155.001.patch, testTokenThreadTimeout.txt, > testTokenThreadTimeout_with_patch.txt > > > The TestDelegationTokenRenewer.testTokenThreadTimeout test committed in > YARN-9768 often fails with timeout. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10167) FS-CS Converter: Need validate c-s.xml after converting
[ https://issues.apache.org/jira/browse/YARN-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046749#comment-17046749 ] Wangda Tan commented on YARN-10167: --- [~pbacsko], agree with this: {quote}Note that the converter itself already starts an FS instance inside to parse and load the allocation file. We can do the same thing with CS. Just load the converted config along with the delta {{yarn-site.xml}} (which essentially means that we merge the original site + the delta) and let's see if it can start. {quote} We can check whether MiniYARNCluster can help here. I'm not sure if we can directly initialize CS, since it has other module dependencies. > FS-CS Converter: Need validate c-s.xml after converting > --- > > Key: YARN-10167 > URL: https://issues.apache.org/jira/browse/YARN-10167 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Priority: Major > Labels: fs2cs, newbie > > Currently we just generate c-s.xml, but we don't validate it. To make > sure the c-s.xml is correct after conversion, it's better to initialize the > CS scheduler using the configs. > Also, in the tests, we should try to leverage MockRM to validate the generated > configs as much as we can. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kinga Marton updated YARN-10148: Attachment: YARN-10148.006.patch > Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch, > YARN-10148.006.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046743#comment-17046743 ] Kinga Marton commented on YARN-10148: - I have attached a new patch with fixes for the reported checkstyle issues. > Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch, > YARN-10148.006.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046737#comment-17046737 ] Hadoop QA commented on YARN-10148: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 10 new + 9 unchanged - 0 fixed = 19 total (was 9) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 44s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}167m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10148 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994767/YARN-10148.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5e9969988a85 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2059f25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/25593/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit |
[jira] [Commented] (YARN-10168) FS-CS Convert: Converter tool doesn't handle min/max resource conversion correct
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046736#comment-17046736 ] Wangda Tan commented on YARN-10168: --- [~pbacsko], what you mentioned all makes sense to me. I think we should only support converting weight to capacity for now (and set max to 100). That will give reliable behavior. This is the only blocker issue we need to fix. For min/maxResource, we should push that to another JIRA. > FS-CS Convert: Converter tool doesn't handle min/max resource conversion > correct > > > Key: YARN-10168 > URL: https://issues.apache.org/jira/browse/YARN-10168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Priority: Blocker > > Trying to understand the logic of converting min and max resources from FS to CS, > I found some issues: > 1) > In FSQueueConverter#emitMaximumCapacity > Existing logic in FS is to specify either a maximum percentage for queues > against cluster resources, or an absolute-valued maximum resource. > In the existing FS2CS converter, when a percentage-based maximum resource is > specified, the converter takes a global resource from the fs2cs CLI, and applies > percentages to that. This is not correct, since the percentage-based value will > get lost, and in the future when cluster resources go up and down, the > maximum resource cannot be changed. > 2) > The logic to deal with min/weight resource is also questionable: > The existing fs2cs tool gives percentage precedence over > absoluteResource, and could set both on a queue config. See > FSQueueConverter.Capacity#toString > However, in CS, compared to FS, the weights/min resource handling is quite different: > CS uses the same queue.capacity to specify both percentage-based and > absolute-resource-based configs (similar to how FS deals with maximum > resource). > The capacity defines guaranteed resource, which also impacts the fair share of the > queue. 
(The more guaranteed resource a queue has, the larger "pie" the queue > can get if there's any additional available resource.) > In FS, minResource defines the guaranteed resource, and weight defines how > much the pie can grow. > So to me, in FS, we should pick either weight or minResource when > generating the CS config. > 3) > In FS, mixed use of absolute-resource configs (like min/maxResource) and > percentage-based ones (like weight) is allowed. But in CS, it is not allowed. The > reason is discussed in YARN-5881 ("Should we support specifying a > mix of percentage ..."). > The existing fs2cs doesn't handle this issue, and could set mixed absolute-resource > and percentage-based resources. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
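The weight-to-capacity conversion proposed here can be sketched as a normalization of sibling FS weights into CS capacity percentages that sum to exactly 100 (with maximum-capacity pinned at 100). This is an illustration of the idea under those assumptions, not the actual FSQueueConverter implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the weight -> capacity conversion: normalize sibling FS
// weights into CS capacity percentages that sum to exactly 100.
// Illustration only, not the actual fs2cs converter logic.
public class WeightToCapacity {

    static Map<String, Double> toCapacities(Map<String, Double> weights) {
        double total = weights.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, Double> capacities = new LinkedHashMap<>();
        double assigned = 0.0;
        int i = 0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            i++;
            double pct = (i == weights.size())
                ? 100.0 - assigned                             // last queue absorbs rounding error
                : Math.round(e.getValue() / total * 10000.0) / 100.0;
            assigned += pct;
            capacities.put(e.getKey(), pct);
        }
        return capacities;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("root.default", 1.0);
        weights.put("root.users", 3.0);
        // root.default=25.0, root.users=75.0; each queue would also get
        // maximum-capacity=100, per the suggestion above.
        System.out.println(toCapacities(weights));
    }
}
```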
[jira] [Updated] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated YARN-10161: --- Attachment: YARN-10161.002.patch > TestRouterWebServicesREST is corrupting STDOUT > -- > > Key: YARN-10161 > URL: https://issues.apache.org/jira/browse/YARN-10161 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 2.10.0, 3.2.1 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Attachments: YARN-10161.001.patch, YARN-10161.002.patch > > > TestRouterWebServicesREST is creating processes that inherit stdin/stdout > from the current process, so the output from those jobs goes into the > standard output of mvn test. > Here's an example from a recent build: > {noformat} > [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM > 1. See FAQ web page and the dump file > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream > [INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 41.644 s - in > org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST > [WARNING] ForkStarter IOException: 506 INFO [main] > resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG: > 522 INFO [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - > registered UNIX signal handlers for [TERM, HUP, INT] > 876 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not > found > 879 INFO [main] security.Groups (Groups.java:refresh(402)) - clearing > userToGroupsMap cache > 930 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml > not found > 930 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addResourcesFileToConf(421)) - Unable to find > 'resource-types.xml'. 
> 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name > = memory-mb, units = Mi, type = COUNTABLE > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name > = vcores, units = , type = COUNTABLE > 974 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2591)) - found resource > yarn-site.xml at > file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml > 001 INFO [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - > Registering class > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher > 053 INFO [main] security.NMTokenSecretManagerInRM > (NMTokenSecretManagerInRM.java:(75)) - NMTokenKeyRollingInterval: > 8640ms and NMTokenKeyActivationDelay: 90ms > 060 INFO [main] security.RMContainerTokenSecretManager > (RMContainerTokenSecretManager.java:(79)) - > ContainerTokenKeyRollingInterval: 8640ms and > ContainerTokenKeyActivationDelay: 90ms > ... {noformat} > It seems like these processes should be rerouting stdout/stderr to a file > instead of dumping it to the console. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
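The fix this issue drives at is to stop the spawned RM/NM/Router processes from inheriting the test JVM's stdout/stderr and instead redirect them to per-process log files via ProcessBuilder. A minimal, self-contained sketch, where the shell echo stands in for the real launch command and the log file name mirrors those shown in the review comments:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Redirect a spawned process's output to a log file instead of
// inheriting the parent's stdout/stderr (inheriting is what corrupts
// surefire's STDOUT). The command is a stand-in for the real RM launch.
public class RedirectDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        File logDir = new File("target/test-dir");
        logDir.mkdirs();
        File log = new File(logDir, "TestRouterWebServicesREST-rm.log");

        ProcessBuilder pb = new ProcessBuilder("sh", "-c", "echo STARTUP_MSG");
        pb.redirectErrorStream(true);                       // merge stderr into stdout
        pb.redirectOutput(ProcessBuilder.Redirect.to(log)); // instead of inheritIO()
        Process p = pb.start();
        p.waitFor();

        System.out.println(Files.readAllLines(log.toPath()).get(0));
    }
}
```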
[jira] [Commented] (YARN-9831) NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService allocate flow
[ https://issues.apache.org/jira/browse/YARN-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046721#comment-17046721 ] Bilwa S T commented on YARN-9831: - Thanks [~ayushtkn] for reviewing. CheckStyle issues are unavoidable. [~maniraj...@gmail.com] can you please check my changes? > NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService > allocate flow > > > Key: YARN-9831 > URL: https://issues.apache.org/jira/browse/YARN-9831 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Bibin Chundatt >Assignee: Bilwa S T >Priority: Critical > Attachments: YARN-9831.001.patch, YARN-9831.002.patch > > > Currently an attempt's NMToken cannot be generated independently. > Each attempt's allocate flow blocks the others. We should improve this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
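The improvement under review is to stop a single shared lock from serializing every attempt's allocate flow. One generic way to express that direction — a common pattern, not the actual YARN-9831 patch — is per-attempt lock striping, so token creation for one attempt never blocks another:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the general fix direction: replace one coarse lock
// that serializes every attempt's allocate flow with a per-attempt lock,
// so token generation for attempt A no longer blocks attempt B.
// Generic pattern only; names and token format are hypothetical.
public class PerAttemptLocks {
    private final Map<String, Object> locks = new ConcurrentHashMap<>();

    byte[] createNMToken(String attemptId, String nodeId) {
        Object lock = locks.computeIfAbsent(attemptId, k -> new Object());
        synchronized (lock) {            // blocks only callers for the same attempt
            return (attemptId + "@" + nodeId).getBytes(); // stand-in for real token
        }
    }

    void unregisterAttempt(String attemptId) {
        locks.remove(attemptId);         // drop the lock with the attempt's state
    }

    public static void main(String[] args) {
        PerAttemptLocks m = new PerAttemptLocks();
        System.out.println(new String(m.createNMToken("appattempt_1_0001_000001", "nm1:8041")));
    }
}
```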
[jira] [Commented] (YARN-10161) TestRouterWebServicesREST is corrupting STDOUT
[ https://issues.apache.org/jira/browse/YARN-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046713#comment-17046713 ] Jim Brennan commented on YARN-10161: Thanks for the review [~inigoiri]! Just to make sure I understand what you are looking for - currently, patch 001 is creating: {noformat} C02V813GHTDD-lm:target jbrennan02$ ls -l test-dir total 1176 -rw-r--r-- 1 jbrennan02 staff 131284 Feb 25 11:36 TestRouterWebServicesREST-nm.log -rw-r--r-- 1 jbrennan02 staff 324296 Feb 25 11:36 TestRouterWebServicesREST-rm.log -rw-r--r-- 1 jbrennan02 staff 115423 Feb 25 11:36 TestRouterWebServicesREST-router.log {noformat} I think you are suggesting that I change it so these files are in {{test-dir/processes}}, correct? I will put up a patch with this change. > TestRouterWebServicesREST is corrupting STDOUT > -- > > Key: YARN-10161 > URL: https://issues.apache.org/jira/browse/YARN-10161 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 2.10.0, 3.2.1 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Minor > Attachments: YARN-10161.001.patch > > > TestRouterWebServicesREST is creating processes that inherit stdin/stdout > from the current process, so the output from those jobs goes into the > standard output of mvn test. > Here's an example from a recent build: > {noformat} > [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM > 1. 
See FAQ web page and the dump file > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/surefire-reports/2020-02-24T08-00-54_776-jvmRun1.dumpstream > [INFO] Tests run: 41, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 41.644 s - in > org.apache.hadoop.yarn.server.router.webapp.TestRouterWebServicesREST > [WARNING] ForkStarter IOException: 506 INFO [main] > resourcemanager.ResourceManager (LogAdapter.java:info(49)) - STARTUP_MSG: > 522 INFO [main] resourcemanager.ResourceManager (LogAdapter.java:info(49)) - > registered UNIX signal handlers for [TERM, HUP, INT] > 876 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - core-site.xml not > found > 879 INFO [main] security.Groups (Groups.java:refresh(402)) - clearing > userToGroupsMap cache > 930 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2588)) - resource-types.xml > not found > 930 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addResourcesFileToConf(421)) - Unable to find > 'resource-types.xml'. 
> 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(126)) - Adding resource type - name > = memory-mb, units = Mi, type = COUNTABLE > 940 INFO [main] resource.ResourceUtils > (ResourceUtils.java:addMandatoryResources(135)) - Adding resource type - name > = vcores, units = , type = COUNTABLE > 974 INFO [main] conf.Configuration > (Configuration.java:getConfResourceAsInputStream(2591)) - found resource > yarn-site.xml at > file:/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/target/test-classes/yarn-site.xml > 001 INFO [main] event.AsyncDispatcher (AsyncDispatcher.java:register(227)) - > Registering class > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher > 053 INFO [main] security.NMTokenSecretManagerInRM > (NMTokenSecretManagerInRM.java:(75)) - NMTokenKeyRollingInterval: > 8640ms and NMTokenKeyActivationDelay: 90ms > 060 INFO [main] security.RMContainerTokenSecretManager > (RMContainerTokenSecretManager.java:(79)) - > ContainerTokenKeyRollingInterval: 8640ms and > ContainerTokenKeyActivationDelay: 90ms > ... {noformat} > It seems like these processes should be rerouting stdout/stderr to a file > instead of dumping it to the console. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10173) Make pid file generation timeout configurable in case of reacquire container
[ https://issues.apache.org/jira/browse/YARN-10173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated YARN-10173: -- Summary: Make pid file generation timeout configurable in case of reacquire container (was: Make container execution reacquire time configurable) > Make pid file generation timeout configurable in case of reacquire container > - > > Key: YARN-10173 > URL: https://issues.apache.org/jira/browse/YARN-10173 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Minor > > We have a cluster with big nodes running lots of Docker containers. > When the NM was restarted and certain Docker containers were reacquired, > their pid files were not generated within 2 seconds, which is the timeout value > for this process. Let's make this configurable, so we could wait a little bit > longer. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10173) Make container execution reacquire time configurable
Adam Antal created YARN-10173: - Summary: Make container execution reacquire time configurable Key: YARN-10173 URL: https://issues.apache.org/jira/browse/YARN-10173 Project: Hadoop YARN Issue Type: Bug Components: yarn Affects Versions: 3.3.0 Reporter: Adam Antal Assignee: Adam Antal We have a cluster with big nodes running lots of Docker containers. When the NM was restarted and certain Docker containers were reacquired, their pid files were not generated within 2 seconds, which is the timeout value for this process. Let's make this configurable, so we could wait a little bit longer. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
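Making the timeout configurable essentially means the reacquire path should poll for the pid file up to a value read from configuration rather than a hard-coded 2 seconds. A sketch of that wait loop follows; the config key named in the comment is hypothetical, as the real key would be defined in YarnConfiguration:

```java
import java.io.File;
import java.io.IOException;

// Sketch of the reacquire-time pid-file wait with the timeout taken from
// configuration instead of a hard-coded 2 seconds. A hypothetical property
// such as "yarn.nodemanager.process-id-file.timeout-ms" would supply
// timeoutMs; the name is illustrative only.
public class PidFileWait {

    static boolean waitForPidFile(File pidFile, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (pidFile.exists()) {
                return true;             // pid file showed up within the timeout
            }
            Thread.sleep(pollMs);        // poll rather than busy-wait
        }
        return pidFile.exists();         // last check at the deadline
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        File pid = File.createTempFile("container", ".pid");
        // The file already exists here, so the wait returns immediately.
        System.out.println(waitForPidFile(pid, 2000, 100));
    }
}
```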
[jira] [Assigned] (YARN-10087) ATS possible NPE on REST API when data is missing
[ https://issues.apache.org/jira/browse/YARN-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph reassigned YARN-10087: Assignee: Tanu Ajmera > ATS possible NPE on REST API when data is missing > - > > Key: YARN-10087 > URL: https://issues.apache.org/jira/browse/YARN-10087 > Project: Hadoop YARN > Issue Type: Bug > Components: ATSv2 >Reporter: Wilfred Spiegelenburg >Assignee: Tanu Ajmera >Priority: Major > Labels: newbie > Attachments: ats_stack.txt > > > If the data stored by the ATS is not complete REST calls to the ATS can > return a NPE instead of results. > {{{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException"}}} > The issue shows up when the ATS was down for a short period and in that time > new applications were started. This causes certain parts of the application > data to be missing in the ATS store. In most cases this is not a problem and > data will be returned but when you start filtering data the filtering fails > throwing the NPE. > In this case the request was for: > {{http://:8188/ws/v1/applicationhistory/apps?user=hive'}} > If certain pieces of data are missing the ATS should not even consider > returning that data, filtered or not. We should not display partial or > incomplete data. > In case of the missing user information ACL checks cannot be correctly > performed and we could see more issues. > A similar issue was fixed in YARN-7118 where the queue details were missing. > It just _skips_ the app to prevent the NPE but that is not the correct thing > when the user is missing -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
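The guard the description asks for amounts to: when filtering application reports, skip any entry whose user field is missing (since neither the filter nor the ACL checks can be evaluated for it), rather than dereferencing null. A minimal model of that check, where AppReport is a hypothetical stand-in for the real ATS report type:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the null-guard the description calls for: when
// filtering application reports, skip entries whose user is missing
// instead of throwing an NPE. AppReport is a stand-in type, not the
// real ATS class.
public class SkipIncompleteApps {

    static class AppReport {
        final String id;
        final String user;   // may be null if the ATS missed the app's start event
        AppReport(String id, String user) { this.id = id; this.user = user; }
    }

    static List<AppReport> filterByUser(List<AppReport> apps, String user) {
        List<AppReport> out = new ArrayList<>();
        for (AppReport app : apps) {
            if (app.user == null) {
                continue;    // incomplete data: skip, as YARN-7118 did for missing queue
            }
            if (app.user.equals(user)) {
                out.add(app);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<AppReport> apps = new ArrayList<>();
        apps.add(new AppReport("app_1", "hive"));
        apps.add(new AppReport("app_2", null));  // stored while the ATS was down
        System.out.println(filterByUser(apps, "hive").size()); // prints 1
    }
}
```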
[jira] [Commented] (YARN-10166) Add detail log for ApplicationAttemptNotFoundException
[ https://issues.apache.org/jira/browse/YARN-10166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1704#comment-1704 ] Hadoop QA commented on YARN-10166: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 52s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}152m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10166 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994765/YARN-10166-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 70324b3959de 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2059f25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/25592/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25592/testReport/ | | Max. process+thread count | 840 (vs. ulimit of 5500) | | modules | C:
[jira] [Commented] (YARN-9831) NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService allocate flow
[ https://issues.apache.org/jira/browse/YARN-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046634#comment-17046634 ] Hadoop QA commented on YARN-9831: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 2 unchanged - 1 fixed = 3 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 14m 26s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 32s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 33s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}151m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-9831 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992314/YARN-9831.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b884f7d5cfc0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2059f25 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Commented] (YARN-10155) TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk
[ https://issues.apache.org/jira/browse/YARN-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046621#comment-17046621 ] Adam Antal commented on YARN-10155: --- I could not reproduce the test failure from the mvn CLI. I saw the test failure multiple times in the mentioned Jenkins results, but that could have been a flaky issue. Let's keep this jira open for a few days, and if the issue does not reoccur, let's close it. Thanks for checking this [~maniraj...@gmail.com] and [~inigoiri]. > TestDelegationTokenRenewer.testTokenThreadTimeout fails in trunk > > > Key: YARN-10155 > URL: https://issues.apache.org/jira/browse/YARN-10155 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Manikandan R >Priority: Major > Attachments: YARN-10155.001.patch, testTokenThreadTimeout.txt, > testTokenThreadTimeout_with_patch.txt > > > The TestDelegationTokenRenewer.testTokenThreadTimeout test committed in > YARN-9768 often fails with timeout. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-8286) Inform AM of container relaunch
[ https://issues.apache.org/jira/browse/YARN-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046587#comment-17046587 ] Adam Antal commented on YARN-8286: -- I removed the target version. > Inform AM of container relaunch > --- > > Key: YARN-8286 > URL: https://issues.apache.org/jira/browse/YARN-8286 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Adam Antal >Priority: Critical > > The AM may need to perform actions when a container has been relaunched. For > example, the service AM would want to change the state it has recorded for > the container and retrieve new container status for the container, in case > the container IP has changed. (The NM would also need to remove the IP it has > stored for the container, so container status calls don't return an IP for a > container that is not currently running.) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-8286) Inform AM of container relaunch
[ https://issues.apache.org/jira/browse/YARN-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated YARN-8286: - Target Version/s: (was: 3.4.0) > Inform AM of container relaunch > --- > > Key: YARN-8286 > URL: https://issues.apache.org/jira/browse/YARN-8286 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Adam Antal >Priority: Critical > > The AM may need to perform actions when a container has been relaunched. For > example, the service AM would want to change the state it has recorded for > the container and retrieve new container status for the container, in case > the container IP has changed. (The NM would also need to remove the IP it has > stored for the container, so container status calls don't return an IP for a > container that is not currently running.) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10087) ATS possible NPE on REST API when data is missing
[ https://issues.apache.org/jira/browse/YARN-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10087: -- Labels: newbie (was: ) > ATS possible NPE on REST API when data is missing > - > > Key: YARN-10087 > URL: https://issues.apache.org/jira/browse/YARN-10087 > Project: Hadoop YARN > Issue Type: Bug > Components: ATSv2 >Reporter: Wilfred Spiegelenburg >Priority: Major > Labels: newbie > Attachments: ats_stack.txt > > > If the data stored by the ATS is not complete REST calls to the ATS can > return a NPE instead of results. > {{{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException"}}} > The issue shows up when the ATS was down for a short period and in that time > new applications were started. This causes certain parts of the application > data to be missing in the ATS store. In most cases this is not a problem and > data will be returned but when you start filtering data the filtering fails > throwing the NPE. > In this case the request was for: > {{http://:8188/ws/v1/applicationhistory/apps?user=hive'}} > If certain pieces of data are missing the ATS should not even consider > returning that data, filtered or not. We should not display partial or > incomplete data. > In case of the missing user information ACL checks cannot be correctly > performed and we could see more issues. > A similar issue was fixed in YARN-7118 where the queue details were missing. > It just _skips_ the app to prevent the NPE but that is not the correct thing > when the user is missing -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
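The skip-vs-return choice discussed in YARN-10087 can be sketched as a small filter: when a record is missing its user field, returning it risks an NPE during filtering and an incorrect ACL check, so the safer option is to exclude the incomplete record entirely. This is a minimal illustration, not ATS code; the `AppRecord` class and `filterByUser` method are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class FilterSketch {
    // Hypothetical stand-in for an ATS application record; in the real
    // store the user can be null if the app started while ATS was down.
    static class AppRecord {
        final String id;
        final String user;
        AppRecord(String id, String user) { this.id = id; this.user = user; }
    }

    // Skip incomplete records instead of dereferencing a null user.
    static List<AppRecord> filterByUser(List<AppRecord> apps, String user) {
        List<AppRecord> out = new ArrayList<>();
        for (AppRecord a : apps) {
            if (a.user == null) {
                continue;  // partial data: do not return it, filtered or not
            }
            if (a.user.equals(user)) {
                out.add(a);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<AppRecord> apps = new ArrayList<>();
        apps.add(new AppRecord("application_1", "hive"));
        apps.add(new AppRecord("application_2", null)); // user lost during ATS outage
        System.out.println(filterByUser(apps, "hive").size()); // 1
    }
}
```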
[jira] [Created] (YARN-10172) Default ApplicationPlacementType class should be configurable
Cyrus Jackson created YARN-10172: Summary: Default ApplicationPlacementType class should be configurable Key: YARN-10172 URL: https://issues.apache.org/jira/browse/YARN-10172 Project: Hadoop YARN Issue Type: Improvement Reporter: Cyrus Jackson Assignee: Cyrus Jackson This can be useful in scheduling apps based on the configured placement type class rather than resorting to LocalityAppPlacementAllocator -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046557#comment-17046557 ] Kinga Marton commented on YARN-10148: - Thank you [~snemeth] for the review! I have addressed your comments in the newly attached patch > Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kinga Marton updated YARN-10148: Attachment: YARN-10148.005.patch > Add Unit test for queue ACL for both FS and CS > -- > > Key: YARN-10148 > URL: https://issues.apache.org/jira/browse/YARN-10148 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Attachments: YARN-10148.001.patch, YARN-10148.002.patch, > YARN-10148.003.patch, YARN-10148.004.patch, YARN-10148.005.patch > > > Add some unit tests covering the queue ACL evaluation for both FS and CS. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10167) FS-CS Converter: Need validate c-s.xml after converting
[ https://issues.apache.org/jira/browse/YARN-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046553#comment-17046553 ] Sunil G commented on YARN-10167: [~pbacsko] thanks for pointing out the Cluster Down scenario. I missed that. If the fs2cs tool can bring up a CS instance, then it's much better. FYI, [~kmarton] has done a similar effort for the YARN validate-mutation API call, so similar code will help here. Thoughts? > FS-CS Converter: Need validate c-s.xml after converting > --- > > Key: YARN-10167 > URL: https://issues.apache.org/jira/browse/YARN-10167 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Priority: Major > Labels: fs2cs, newbie > > Currently we just generated c-s.xml, but we didn't validate that. To make > sure the c-s.xml is correct after conversion, it's better to initialize the > CS scheduler using configs. > Also, in the test, we should try to leverage MockRM to validate generated > configs as much as we could.
[jira] [Created] (YARN-10171) Add support for increment-allocation of custom resource types
Adam Antal created YARN-10171: - Summary: Add support for increment-allocation of custom resource types Key: YARN-10171 URL: https://issues.apache.org/jira/browse/YARN-10171 Project: Hadoop YARN Issue Type: Sub-task Components: yarn Affects Versions: 3.3.0 Reporter: Adam Antal The FairScheduler's {{yarn.resource-types.memory-mb.increment-allocation}} and {{yarn.resource-types.vcores.increment-allocation}} configs are converted to the {{yarn.scheduler.minimum-allocation-*}} configs, which is fine for the vcores and memory. In case of custom resource types like GPU if {{yarn.resource-types.gpu.increment-allocation}} is set, then CS will not be aware of that. We don't have a {{yarn.scheduler.minimum-allocation-gpu}} setting for this purpose, but {{yarn.resource-types.gpu.min-allocation}} is respected by the {{ResourceCalculator}} through the {{ResourceUtils#getResourceInformationMapFromConfig}} which would provide us with the same behaviour. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
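The increment-allocation semantics that YARN-10171 wants preserved for custom resource types boil down to rounding a request up to the nearest multiple of the configured increment, the same way memory and vcores increments behave. A minimal sketch of that arithmetic, assuming integer resource units (the method name is ours, not a YARN API):

```java
public class IncrementSketch {
    // Round a resource request up to the nearest multiple of the increment,
    // e.g. a request for 3 GPUs with increment-allocation 2 becomes 4.
    static long roundUp(long requested, long increment) {
        return ((requested + increment - 1) / increment) * increment;
    }

    public static void main(String[] args) {
        System.out.println(roundUp(3, 2)); // 4
        System.out.println(roundUp(4, 2)); // 4 (already a multiple)
    }
}
```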
[jira] [Updated] (YARN-10166) Add detail log for ApplicationAttemptNotFoundException
[ https://issues.apache.org/jira/browse/YARN-10166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Youquan Lin updated YARN-10166: --- Attachment: YARN-10166-002.patch Fix Version/s: (was: 3.1.3) Target Version/s: (was: 3.1.3) Affects Version/s: (was: 3.1.3) I submitted a new patch, and it compiled successfully locally. > Add detail log for ApplicationAttemptNotFoundException > -- > > Key: YARN-10166 > URL: https://issues.apache.org/jira/browse/YARN-10166 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Reporter: Youquan Lin >Priority: Minor > Labels: patch > Attachments: YARN-10166-001.patch, YARN-10166-002.patch > > > Suppose user A killed the app; then ApplicationMasterService will call > unregisterAttempt() for this app. Sometimes, the app's AM continues to call the > allocate() method and reports an error as follows. > {code:java} > Application attempt appattempt_1582520281010_15271_01 doesn't exist in > ApplicationMasterService cache. > {code} > If user B has been watching the AM log, he will be confused about why the > attempt is no longer in the ApplicationMasterService cache. So I think we can > add detail log for ApplicationAttemptNotFoundException as follows. > {code:java} > Application attempt appattempt_1582630210671_14658_01 doesn't exist in > ApplicationMasterService cache. App state: KILLED, finalStatus: KILLED, > diagnostics: App application_1582630210671_14658 killed by userA from > 127.0.0.1 > {code}
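The improvement proposed in YARN-10166 amounts to appending the attempt's final state, final status, and diagnostics to the not-found message. A hedged sketch of how such a message could be assembled; the helper name `buildNotFoundMessage` and its parameters are illustrative, not the actual ApplicationMasterService code.

```java
public class NotFoundMessageSketch {
    // Assemble a detailed not-found message including app state, final
    // status, and diagnostics, mirroring the example in the issue.
    static String buildNotFoundMessage(String attemptId, String state,
            String finalStatus, String diagnostics) {
        return "Application attempt " + attemptId
            + " doesn't exist in ApplicationMasterService cache."
            + " App state: " + state
            + ", finalStatus: " + finalStatus
            + ", diagnostics: " + diagnostics;
    }

    public static void main(String[] args) {
        System.out.println(buildNotFoundMessage(
            "appattempt_1582630210671_14658_01",
            "KILLED", "KILLED",
            "App application_1582630210671_14658 killed by userA from 127.0.0.1"));
    }
}
```

With this, an AM owner reading the log immediately sees that the attempt left the cache because the app was killed, instead of guessing.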
[jira] [Updated] (YARN-10003) YarnConfigurationStore#checkVersion throws exception that belongs to RMStateStore
[ https://issues.apache.org/jira/browse/YARN-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10003: -- Issue Type: Improvement (was: Bug) > YarnConfigurationStore#checkVersion throws exception that belongs to > RMStateStore > - > > Key: YARN-10003 > URL: https://issues.apache.org/jira/browse/YARN-10003 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > > RMStateVersionIncompatibleException is thrown from method "checkVersion". > Moreover, there's a TODO here saying this method is copied from RMStateStore. > We should revise this method a bit. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10167) FS-CS Converter: Need validate c-s.xml after converting
[ https://issues.apache.org/jira/browse/YARN-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046466#comment-17046466 ] Peter Bacsko edited comment on YARN-10167 at 2/27/20 11:01 AM: --- [~sunilg] it's way too complicated. IMO we don't need to contact the RM for validation. As [~leftnoteasy] said, the cluster might be down. Note that the converter itself already starts an FS instance inside to parse and load the allocation file. We can do the same thing with CS. Just load the converted config along with the delta {{yarn-site.xml}} (which essentially means that we merge the original site + the delta) and let's see if it can start. If not, we might have a problem and the configuration needs adjustments. Otherwise it's good (at least from a syntactic perspective). was (Author: pbacsko): [~sunilg] I think it's way too complicated. I don't think that we need to contact the RM for validation. As [~leftnoteasy] said, the cluster might be down. Note that the converter itself already starts an FS instance inside to parse and load the allocation file. We can do the same thing with CS. Just load the converted config along with the delta {{yarn-site.xml}} (which essentially means that we merge the original site + the delta) and let's see if it can start. If not, we might have a problem and the configuration needs adjustments. Otherwise it's good (at least from a syntactic perspective). > FS-CS Converter: Need validate c-s.xml after converting > --- > > Key: YARN-10167 > URL: https://issues.apache.org/jira/browse/YARN-10167 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Priority: Major > Labels: fs2cs, newbie > > Currently we just generated c-s.xml, but we didn't validate that. To make > sure the c-s.xml is correct after conversion, it's better to initialize the > CS scheduler using configs. > Also, in the test, we should try to leverage MockRM to validate generated > configs as much as we could. 
[jira] [Commented] (YARN-9831) NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService allocate flow
[ https://issues.apache.org/jira/browse/YARN-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046489#comment-17046489 ] Ayush Saxena commented on YARN-9831: Thanx [~BilwaST] for the patch. On a quick look. Looks good Please fix the checkstyle warnings. [~maniraj...@gmail.com] Give a check, if this is fine with you.. > NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService > allocate flow > > > Key: YARN-9831 > URL: https://issues.apache.org/jira/browse/YARN-9831 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Bibin Chundatt >Assignee: Bilwa S T >Priority: Critical > Attachments: YARN-9831.001.patch, YARN-9831.002.patch > > > Currently attempt's NMToken cannot be generated independently. > Each attempts allocate flow blocks each other. We should improve the same -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10160) Add auto queue creation related configs to RMWebService#CapacitySchedulerQueueInfo
[ https://issues.apache.org/jira/browse/YARN-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046476#comment-17046476 ] Hadoop QA commented on YARN-10160: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 50s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 73 unchanged - 0 fixed = 75 total (was 73) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10160 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994739/YARN-10160-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e64076c03557 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7dfa37e | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/25590/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit |
[jira] [Updated] (YARN-9936) Support vector of capacity percentages in Capacity Scheduler configuration
[ https://issues.apache.org/jira/browse/YARN-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9936: - Parent: YARN-9698 Issue Type: Sub-task (was: Improvement) > Support vector of capacity percentages in Capacity Scheduler configuration > -- > > Key: YARN-9936 > URL: https://issues.apache.org/jira/browse/YARN-9936 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Zoltan Siegl >Assignee: Zoltan Siegl >Priority: Major > Attachments: Capacity Scheduler support of “vector of resources > percentage”.pdf > > > Currently, the Capacity Scheduler queue configuration supports two ways to > set queue capacity. > * In percentage of all available resources as a float ( eg. 25.0 ) means 25% > of the resources of its parent queue for all resource types equally (eg. 25% > of all memory, 25% of all CPU cores, and 25% of all available GPU in the > cluster) The percentages of all queues has to add up to 100%. > * In an absolute amount of resources ( e.g. > memory=4GB,vcores=20,yarn.io/gpu=4 ). The amount of all resources in the > queues has to be less than or equal to all resources in the cluster. > Apart from these two already existing ways, there is a demand to add capacity > percentage of each available resource type separately. (eg. > {{memory=20%,vcores=40%,yarn.io/gpu=100%}}). > At the same time, a similar concept should be included with queues > maximum-capacity as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10167) FS-CS Converter: Need validate c-s.xml after converting
[ https://issues.apache.org/jira/browse/YARN-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046466#comment-17046466 ] Peter Bacsko commented on YARN-10167: - [~sunilg] I think it's way too complicated. I don't think that we need to contact the RM for validation. As [~leftnoteasy] said, the cluster might be down. Note that the converter itself already starts an FS instance inside to parse and load the allocation file. We can do the same thing with CS. Just load the converted config along with the delta {{yarn-site.xml}} (which essentially means that we merge the original site + the delta) and let's see if it can start. If not, we might have a problem and the configuration needs adjustments. Otherwise it's good (at least from a syntactic perspective). > FS-CS Converter: Need validate c-s.xml after converting > --- > > Key: YARN-10167 > URL: https://issues.apache.org/jira/browse/YARN-10167 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Priority: Major > Labels: fs2cs, newbie > > Currently we just generated c-s.xml, but we didn't validate that. To make > sure the c-s.xml is correct after conversion, it's better to initialize the > CS scheduler using configs. > Also, in the test, we should try to leverage MockRM to validate generated > configs as much as we could. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
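The merge step described above (original yarn-site plus the converter's delta, validated offline without contacting a live RM) can be sketched with plain property maps. This is an illustration of the merge semantics only; the property names are examples and the real fs2cs validation would initialize an actual CapacityScheduler from the merged Configuration.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigMergeSketch {
    // Merge the original site properties with the converter's delta;
    // delta entries override the original site, matching how a generated
    // yarn-site.xml delta is meant to be applied.
    static Map<String, String> merge(Map<String, String> site,
                                     Map<String, String> delta) {
        Map<String, String> merged = new HashMap<>(site);
        merged.putAll(delta);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> site = new HashMap<>();
        site.put("yarn.resourcemanager.scheduler.class", "FairScheduler");
        site.put("yarn.scheduler.fair.allocation.file", "fair-scheduler.xml");

        Map<String, String> delta = new HashMap<>();
        delta.put("yarn.resourcemanager.scheduler.class", "CapacityScheduler");

        Map<String, String> merged = merge(site, delta);
        System.out.println(merged.get("yarn.resourcemanager.scheduler.class"));
        // The merged config is what an offline CS init would then validate.
    }
}
```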
[jira] [Updated] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated YARN-10110: - Component/s: federation > In Federation Secure cluster Application submission fails when authorization > is enabled > --- > > Key: YARN-10110 > URL: https://issues.apache.org/jira/browse/YARN-10110 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Blocker > Attachments: YARN-10110.001.patch, YARN-10110.002.patch > > > 【Precondition】: > 1. Secure Federated cluster is available > 2. Add the below configuration in Router and client core-site.xml > hadoop.security.authorization= true > 3. Restart the router service > 【Test step】: > 1. Go to router client bin path and submit a MR PI job > 2. Observe the client console screen > 【Expect Output】: > No error should be thrown and Job should be successful > 【Actual Output】: > Job failed prompting "Protocol interface > org.apache.hadoop.yarn.api.ApplicationClientProtocolPB is not known.," > 【Additional Note】: > But on setting the parameter as false, job is submitted and success. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10168) FS-CS Convert: Converter tool doesn't handle min/max resource conversion correct
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046405#comment-17046405 ] Peter Bacsko edited comment on YARN-10168 at 2/27/20 9:33 AM: -- [~leftnoteasy] we really have to talk about this. _"In the existing FS2CS converter, when a percentage-based maximum resource is specified, the converter takes a global resource from fs2cs CLI, and applies percentages to that. It is not correct since the percentage-based value will get lost, and in the future when cluster resources go up and down, the maximum resource cannot be changed."_ That's true, but you can't define a vector of percentages to CS at the moment. That's why YARN-9936 was created, but unfortunately it hasn't been finished yet. So you can have only a single percentage. How do you deal with that if the input mem/vcore percentages are different? _"In FS, minResource defined the guaranteed resource, and weight defined how much the pie can grow to._ _So to me, in FS, we should pick and choose either weight or minResource to generate CS."_ {{}} is optional in FS. You don't always have it. The only thing that is mandatory is the weight. That's why weight was used as a starting point. _"In FS, mix-use of absolute-resource configs (like min/maxResource), and percentage-based (like weight) is allowed. But in CS, it is not allowed."_ That's weird. I was under the impression that for static queue configs, you can mix capacity and absolute resource. In this case, the verification of sum(caps) == 100.0 is skipped. So is this assumption false? was (Author: pbacsko): [~leftnoteasy] we really have to talk about this. _"In the existing FS2CS converter, when a percentage-based maximum resource is specified, the converter takes a global resource from fs2cs CLI, and applies percentages to that. 
It is not correct since the percentage-based value will get lost, and in the future when cluster resources go up and down, the maximum resource cannot be changed."_ That's true, but you can't define a vector of percentages to CS at the moment. That's why YARN-9936 was created, but unfortunately it hasn't been finished yet. So you can have only a single percentage. How do you deal with that if the input mem/vcore percentages are different? _"In FS, minResource defined the guaranteed resource, and weight defined how much the pie can grow to._ _So to me, in FS, we should pick and choose either weight or minResource to generate CS."_ {{}} is optional in FS. You don't always have it. The only thing that is mandatory is the weight. That's why weight was used as a starting point. _"In FS, mix-use of absolute-resource configs (like min/maxResource), and percentage-based (like weight) is allowed. But in CS, it is not allowed."_ That's weird. I was under the impression that for static queue configs, you can mix capacity and absolute resource. In this case, the verification of sum(caps) == 100.0 is skipped. So is this assumption false? > FS-CS Convert: Converter tool doesn't handle min/max resource conversion > correct > > > Key: YARN-10168 > URL: https://issues.apache.org/jira/browse/YARN-10168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Priority: Blocker > > Trying to understand logics of convert min and max resource from FS to CS, > and found some issues: > 1) > In FSQueueConverter#emitMaximumCapacity > Existing logic in FS is to either specify a maximum percentage for queues > against cluster resources. Or, specify an absolute valued maximum resource. > In the existing FS2CS converter, when a percentage-based maximum resource is > specified, the converter takes a global resource from fs2cs CLI, and applies > percentages to that. 
It is not correct since the percentage-based value will > get lost, and in the future when cluster resources go up and down, the > maximum resource cannot be changed. > 2) > The logic to deal with min/weight resource is also questionable: > The existing fs2cs tool, it takes precedence of percentage over > absoluteResource, and could set both to a queue config. See > FSQueueConverter.Capacity#toString > However, in CS, comparing to FS, the weights/min resource is quite different: > CS use the same queue.capacity to specify both percentage-based or > absolute-resource-based configs (Similar to how FS deal with maximum > Resource). > The capacity defines guaranteed resource, which also impact fairshare of the > queue. (The more guaranteed resource a queue has, the larger "pie" the queue > can get if there's any additional available resource). > In FS, minResource defined the guaranteed resource, and weight defined how > much the pie can grow to. > So to me, in FS,
[jira] [Commented] (YARN-10168) FS-CS Convert: Converter tool doesn't handle min/max resource conversion correct
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046406#comment-17046406 ] Peter Bacsko commented on YARN-10168:
-

BTW, is this really a blocker for the converter?

> FS-CS Convert: Converter tool doesn't handle min/max resource conversion correct
>
> Key: YARN-10168
> URL: https://issues.apache.org/jira/browse/YARN-10168
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Wangda Tan
> Priority: Blocker
>
> Trying to understand the logic of converting min and max resources from FS to CS, I found some issues:
>
> 1) In FSQueueConverter#emitMaximumCapacity
> The existing logic in FS is to specify a queue's maximum either as a percentage of cluster resources or as an absolute resource value.
> In the existing FS2CS converter, when a percentage-based maximum resource is specified, the converter takes a global resource from the fs2cs CLI and applies the percentage to it. This is not correct, since the percentage-based value gets lost, and when cluster resources later go up and down, the maximum resource cannot follow them.
>
> 2) The logic for dealing with min/weight resources is also questionable:
> The existing fs2cs tool gives percentage precedence over absoluteResource, and can set both on a queue config. See FSQueueConverter.Capacity#toString.
> However, weights/min resources work quite differently in CS than in FS: CS uses the same queue.capacity to specify both percentage-based and absolute-resource-based configs (similar to how FS deals with maximum resources).
> The capacity defines the guaranteed resource, which also affects the fair share of the queue (the more guaranteed resource a queue has, the larger the "pie" the queue can get if there is any additional available resource).
> In FS, minResource defines the guaranteed resource, and weight defines how much the pie can grow.
> So to me, in FS, we should pick either weight or minResource to generate the CS config.
>
> 3) In FS, mixing absolute-resource configs (like min/maxResource) and percentage-based ones (like weight) is allowed. But in CS, it is not allowed. The reason is discussed in YARN-5881 ("Should we support specifying a mix of percentage ...").
> The existing fs2cs tool doesn't handle this issue, and can set mixed absolute-resource and percentage-based values.

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
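Point 1 of the description can be made concrete with a minimal, self-contained Java sketch. This is not converter code; the class name, numbers, and the memory-only view are made up for illustration:

```java
// Hypothetical illustration of the FSQueueConverter#emitMaximumCapacity issue:
// a 50% maximum converted against a snapshot of cluster memory stays fixed
// even after the cluster grows.
public class MaxCapacityConversionSketch {
    public static void main(String[] args) {
        double maxPercentage = 0.50;           // FS: 50% of cluster resources
        long clusterMemAtConversion = 100_000; // MB, as seen by the fs2cs CLI

        // What the converter emits: a fixed absolute value.
        long emittedMaxMem = (long) (maxPercentage * clusterMemAtConversion);

        // Later the cluster doubles; the intended maximum should follow it.
        long clusterMemLater = 200_000;
        long intendedMaxMem = (long) (maxPercentage * clusterMemLater);

        System.out.println(emittedMaxMem);  // 50000 - frozen at conversion time
        System.out.println(intendedMaxMem); // 100000 - what 50% should now mean
    }
}
```

The gap between the two printed values is exactly the information the description says gets lost.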
[jira] [Commented] (YARN-10168) FS-CS Convert: Converter tool doesn't handle min/max resource conversion correct
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046405#comment-17046405 ] Peter Bacsko commented on YARN-10168:
-

[~leftnoteasy] we really have to talk about this.

_"In the existing FS2CS converter, when a percentage-based maximum resource is specified, the converter takes a global resource from fs2cs CLI, and applies percentages to that. It is not correct since the percentage-based value will get lost, and in the future when cluster resources go up and down, the maximum resource cannot be changed."_

That's true, but you can't define a vector of percentages in CS at the moment. That's why YARN-9936 was created, but unfortunately it hasn't been finished yet. So you can have only a single percentage. How do you deal with that if the input mem/vcore percentages are different?

_"In FS, minResource defined the guaranteed resource, and weight defined how much the pie can grow to. So to me, in FS, we should pick and choose either weight or minResource to generate CS."_

{{}} is optional in FS. You don't always have it. The only thing that is mandatory is the weight. That's why the weight was used as the starting point.

_"In FS, mix-use of absolute-resource configs (like min/maxResource), and percentage-based (like weight) is allowed. But in CS, it is not allowed."_

That's weird. I was under the impression that for static queue configs, you can mix capacity and absolute resources. In that case, the verification of sum(caps) == 100.0 is skipped. So is this assumption false?
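Peter's question above — CS accepts only a single capacity percentage today (pre-YARN-9936), while an FS queue may hold different memory and vcore percentages — can be sketched in a few lines. The collapsing rule below is just one possible heuristic for illustration, not what fs2cs actually does:

```java
// Hypothetical: collapsing a (memory%, vcores%) pair into the single
// percentage CS supports. Whatever rule is chosen (max, min, average),
// information from the other dimension is lost.
public class PercentageVectorSketch {
    static double collapse(double memPct, double vcorePct) {
        // One possible heuristic: take the larger share so neither
        // dimension is under-guaranteed; the smaller one is then inflated.
        return Math.max(memPct, vcorePct);
    }

    public static void main(String[] args) {
        // FS queue: 40% of memory, 60% of vcores.
        double single = collapse(40.0, 60.0);
        System.out.println(single); // 60.0 - the 40% memory limit is lost
    }
}
```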
[jira] [Comment Edited] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046397#comment-17046397 ] Szilard Nemeth edited comment on YARN-10148 at 2/27/20 9:31 AM:

Thanks [~kmarton] for your patch. A couple of comments:

*In QueueACLsTestBase:*

1. QueueACLsTestBase#checkAccess: this method contains 3 code blocks that are very similar. The method should have a body that contains assertions for administer and submit access:
{code}
Assert.assertEquals(
    String.format(failureMsg, QueueACL.ADMINISTER_QUEUE, "root"),
    rootAccess,
    resourceManager.getResourceScheduler()
        .checkAccess(user, QueueACL.ADMINISTER_QUEUE, "root"));
Assert.assertEquals(
    String.format(failureMsg, QueueACL.SUBMIT_APPLICATIONS, "root"),
    rootAccess,
    resourceManager.getResourceScheduler()
        .checkAccess(user, QueueACL.SUBMIT_APPLICATIONS, "root"));
{code}
I'll let you decide whether to "hardcode" the String.format call in the extracted method (as it is the same for all 3 calls) or to provide it as a parameter. The main point is that the queue name (be it "root" or a method call that returns the name of the queue, like "getQueueD()") should be given as a parameter. You can use a Supplier as a parameter: https://www.baeldung.com/java-8-functional-interfaces#Suppliers

2. Can you please add an explanation as javadoc for all the new test cases added to QueueACLsTestBase? They are not very straightforward or easy to understand for me.

3. Nit: in TestCapacitySchedulerQueueACLs#updateConfigWithDAndD1Queues: please use uppercase "ACL" in the javadoc.

4. In TestCapacitySchedulerQueueACLs#updateConfigWithDAndD1Queues: the local variables cPath, c1Path should be named dPath, d1Path, right?

5. In TestCapacitySchedulerQueueACLs#updateConfigWithDAndD1Queues: to make the whole thing easier to read, you could extract a helper method that sets a capacity for a queue:
{code}
csConf.setCapacity(CapacitySchedulerConfiguration.ROOT + "." + QUEUEA, 30f);
{code}
It could take one parameter, the name of the leaf queue, and the method could append the full queue path to it. Of course, you can use the full queue path as the parameter if you prefer that.

6. Very similar to 5.: could you extract a method that sets the admin and submit ACLs together? You always call these together:
{code}
csConf.setAcl(cPath, QueueACL.ADMINISTER_QUEUE, queueDAcl);
csConf.setAcl(cPath, QueueACL.SUBMIT_APPLICATIONS, queueDAcl);
{code}

7. Nit: in TestCapacitySchedulerQueueACLs#updateConfigWithDAndD1Queues: the if conditions at the end of the method are oddly formatted (no space between "if" and the parentheses).

8. Nit: TestFairSchedulerQueueACLs#updateConfigWithDAndD1Queues: please use uppercase "ACL" in the javadoc.
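The Supplier-based extraction suggested in point 1 of the review could look roughly like the self-contained sketch below. QueueACL, checkAccess, and the access rule here are stand-ins for the real YARN types, not the actual test-base code:

```java
import java.util.function.Supplier;

// Minimal sketch of extracting the repeated assertion blocks into one helper.
public class CheckAccessSketch {
    enum QueueACL { ADMINISTER_QUEUE, SUBMIT_APPLICATIONS }

    // Stand-in for ResourceScheduler#checkAccess: here, every user may
    // access only the "root" queue.
    static boolean checkAccess(String user, QueueACL acl, String queue) {
        return "root".equals(queue);
    }

    // The extracted helper: the queue name is supplied lazily, so the same
    // method serves the literal "root" as well as getQueueD()-style accessors.
    static void assertQueueAccess(String user, boolean expected,
                                  Supplier<String> queueName) {
        String queue = queueName.get();
        String failureMsg = "Wrong %s access for queue %s";
        for (QueueACL acl : QueueACL.values()) {
            boolean actual = checkAccess(user, acl, queue);
            if (actual != expected) {
                throw new AssertionError(String.format(failureMsg, acl, queue));
            }
        }
    }

    public static void main(String[] args) {
        assertQueueAccess("alice", true, () -> "root");
        assertQueueAccess("alice", false, () -> "queueD");
        System.out.println("ok"); // both assertions passed
    }
}
```

In the real test base, the lambda passed in would call the subclass's queue accessor, which is exactly why a Supplier fits better than a plain String when the queue name is computed per scheduler.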
[jira] [Commented] (YARN-10148) Add Unit test for queue ACL for both FS and CS
[ https://issues.apache.org/jira/browse/YARN-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046397#comment-17046397 ] Szilard Nemeth commented on YARN-10148:
---

> Add Unit test for queue ACL for both FS and CS
>
> Key: YARN-10148
> URL: https://issues.apache.org/jira/browse/YARN-10148
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: scheduler
> Reporter: Kinga Marton
> Assignee: Kinga Marton
> Priority: Major
> Attachments: YARN-10148.001.patch, YARN-10148.002.patch, YARN-10148.003.patch, YARN-10148.004.patch
>
> Add some unit tests covering the queue ACL evaluation for both FS and CS.
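The helper extractions suggested in points 5 and 6 of the review could be sketched as below. CsConf here is a hypothetical stand-in for CapacitySchedulerConfiguration so the example is self-contained; the real patch would call the Hadoop class directly:

```java
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;

// Sketch of the two proposed helpers: one sets a leaf queue's capacity from
// its short name, the other sets the admin and submit ACLs in one call.
public class QueueConfigHelpers {
    static final String ROOT = "root";
    enum QueueACL { ADMINISTER_QUEUE, SUBMIT_APPLICATIONS }

    // Stand-in for CapacitySchedulerConfiguration.
    static class CsConf {
        final Map<String, Float> capacities = new HashMap<>();
        final Map<String, Map<QueueACL, String>> acls = new HashMap<>();
        void setCapacity(String path, float cap) { capacities.put(path, cap); }
        void setAcl(String path, QueueACL acl, String aclString) {
            acls.computeIfAbsent(path, k -> new EnumMap<>(QueueACL.class))
                .put(acl, aclString);
        }
    }

    // Point 5: the helper appends the full queue path to the leaf name.
    static void setLeafCapacity(CsConf conf, String leafQueue, float capacity) {
        conf.setCapacity(ROOT + "." + leafQueue, capacity);
    }

    // Point 6: admin and submit ACLs are always set together in the test.
    static void setAdminAndSubmitAcl(CsConf conf, String queuePath, String acl) {
        conf.setAcl(queuePath, QueueACL.ADMINISTER_QUEUE, acl);
        conf.setAcl(queuePath, QueueACL.SUBMIT_APPLICATIONS, acl);
    }

    public static void main(String[] args) {
        CsConf conf = new CsConf();
        setLeafCapacity(conf, "queueD", 30f);
        setAdminAndSubmitAcl(conf, "root.queueD", "userD ");
        System.out.println(conf.capacities.get("root.queueD")); // 30.0
    }
}
```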
[jira] [Commented] (YARN-10167) FS-CS Converter: Need validate c-s.xml after converting
[ https://issues.apache.org/jira/browse/YARN-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046312#comment-17046312 ] Peter Bacsko commented on YARN-10167:
-

I think the command can be part of the {{yarn fs2cs}} tool, behind a switch like {{--validate-cs-config}} or whatever.

> FS-CS Converter: Need validate c-s.xml after converting
>
> Key: YARN-10167
> URL: https://issues.apache.org/jira/browse/YARN-10167
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Wangda Tan
> Priority: Major
> Labels: fs2cs, newbie
>
> Currently we just generate c-s.xml, but we don't validate it. To make sure the c-s.xml is correct after conversion, it would be better to initialize the CS scheduler using the generated configs.
> Also, in the tests, we should try to leverage MockRM to validate the generated configs as much as we can.
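One kind of check such a validation step could run, besides initializing the scheduler itself, is verifying that percentage capacities of sibling queues sum to 100. This is a hypothetical, self-contained sketch, not fs2cs code; the real tool would read the values from the generated c-s.xml:

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of a post-conversion sanity check: for each parent queue,
// the percentage capacities of its children should sum to 100.
public class CsConfigValidator {
    static boolean capacitiesSumTo100(Map<String, List<Float>> childCapsByParent) {
        for (Map.Entry<String, List<Float>> e : childCapsByParent.entrySet()) {
            float sum = 0f;
            for (float c : e.getValue()) {
                sum += c;
            }
            if (Math.abs(sum - 100f) > 0.001f) {
                return false; // this parent's children are mis-configured
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Made-up queue trees: the first is valid, the second is not.
        Map<String, List<Float>> ok = Map.of("root", List.of(30f, 70f));
        Map<String, List<Float>> bad = Map.of("root", List.of(30f, 60f));
        System.out.println(capacitiesSumTo100(ok));  // true
        System.out.println(capacitiesSumTo100(bad)); // false
    }
}
```

Initializing a CapacityScheduler against the generated file, as the description suggests, would catch far more than this single rule; the sketch only shows the shape of a cheap standalone check.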