[jira] [Commented] (YARN-9990) Testcase fails with "Insufficient configured threads: required=16 < max=10"
[ https://issues.apache.org/jira/browse/YARN-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984756#comment-16984756 ]

Hudson commented on YARN-9990:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17711 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17711/])
YARN-9990. Testcase fails with Insufficient configured threads: (abmodi: rev a2dadac790ae3b4e3ab411be84d909d18af33f6e)

* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestSecureApiServiceClient.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/amfilter/TestAmFilter.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestApiServiceClient.java
[jira] [Commented] (YARN-9990) Testcase fails with "Insufficient configured threads: required=16 < max=10"
[ https://issues.apache.org/jira/browse/YARN-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984753#comment-16984753 ]

Abhishek Modi commented on YARN-9990:
-------------------------------------

Committed to trunk. Thanks [~prabhujoseph] for the patch.

> Testcase fails with "Insufficient configured threads: required=16 < max=10"
> ---------------------------------------------------------------------------
>
>                 Key: YARN-9990
>                 URL: https://issues.apache.org/jira/browse/YARN-9990
>             Project: Hadoop YARN
>          Issue Type: Bug
>    Affects Versions: 3.3.0
>            Reporter: Prabhu Joseph
>            Assignee: Prabhu Joseph
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: YARN-9990-001.patch
>
>
> Testcase fails with "Insufficient configured threads: required=16 < max=10".
> The following testcases are failing:
> 1. TestWebAppProxyServlet
> 2. TestAmFilter
> 3. TestApiServiceClient
> 4. TestSecureApiServiceClient
> {code}
> [ERROR] org.apache.hadoop.yarn.server.webproxy.TestWebAppProxyServlet  Time elapsed: 0.396 s <<< ERROR!
> java.lang.IllegalStateException: Insufficient configured threads: required=16 < max=10 for QueuedThreadPool[qtp1597249648]@5f341870{STARTED,8<=8<=10,i=8,r=1,q=0}[ReservedThreadExecutor@4c762604{s=0/1,p=0}]
> 	at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:156)
> 	at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:130)
> 	at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:182)
> 	at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> 	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> 	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> 	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> 	at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:283)
> 	at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> 	at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231)
> 	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> 	at org.eclipse.jetty.server.Server.doStart(Server.java:385)
> 	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> 	at org.apache.hadoop.yarn.server.webproxy.TestWebAppProxyServlet.start(TestWebAppProxyServlet.java:102)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> [INFO] Running org.apache.hadoop.yarn.server.webproxy.amfilter.TestAmFilter
> [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.326 s <<< FAILURE! - in org.apache.hadoop.yarn.server.webproxy.amfilter.TestAmFilter
> [ERROR] testFindRedirectUrl(org.apache.hadoop.yarn.server.webproxy.amfilter.TestAmFilter)  Time elapsed: 0.306 s <<< ERROR!
> java.lang.IllegalStateException: Insufficient configured threads: required=16 < max=10 for QueuedThreadPool[qtp485041780]@1ce92674{STARTED,8<=8<=10,i=8,r=1,q=0}[ReservedThreadExecutor@31f924f5{s=0/1,p=0}]
> {code}
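For context on the failure quoted above: Jetty's ThreadPoolBudget refuses to start a server when the threads its connectors need to lease exceed the pool's configured maxThreads, so test servers built on a small QueuedThreadPool fail with exactly this message. Below is a minimal, self-contained sketch of that budget arithmetic; the check is a simplified stand-in for Jetty's actual `ThreadPoolBudget`, and only the numbers (required=16, max=10) come from the error message:

```java
// Simplified model of Jetty's thread-pool budget check (illustrative only,
// not the real org.eclipse.jetty.util.thread.ThreadPoolBudget code).
public class ThreadBudgetDemo {

    // Throws the same kind of error the failing tests hit when the threads
    // leased by connectors (required) exceed the pool's configured maximum.
    static void checkBudget(int maxThreads, int requiredThreads) {
        if (requiredThreads > maxThreads) {
            throw new IllegalStateException(
                "Insufficient configured threads: required=" + requiredThreads
                    + " < max=" + maxThreads);
        }
    }

    public static void main(String[] args) {
        try {
            // The test servers capped their QueuedThreadPool at 10 threads,
            // while their connectors needed to lease 16 -- startup fails.
            checkBudget(10, 16);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        // A larger pool satisfies the budget and lets the server start.
        checkBudget(64, 16);
        System.out.println("budget ok with maxThreads=64");
    }
}
```

The actual fix raises the thread-pool capacity of the embedded test servers; the concrete values live in the four edited test files listed in the commit.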
[jira] [Commented] (YARN-9938) Validate Parent Queue for QueueMapping contains dynamic group as parent queue
[ https://issues.apache.org/jira/browse/YARN-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984742#comment-16984742 ]

Hadoop QA commented on YARN-9938:
---------------------------------

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 37s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 37s | trunk passed |
| +1 | compile | 0m 41s | trunk passed |
| +1 | checkstyle | 0m 35s | trunk passed |
| +1 | mvnsite | 0m 45s | trunk passed |
| +1 | shadedclient | 13m 59s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 12s | trunk passed |
| +1 | javadoc | 0m 29s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 43s | the patch passed |
| +1 | compile | 0m 37s | the patch passed |
| +1 | javac | 0m 37s | the patch passed |
| +1 | checkstyle | 0m 28s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 43 unchanged - 1 fixed = 43 total (was 44) |
| +1 | mvnsite | 0m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 33s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 17s | the patch passed |
| +1 | javadoc | 0m 26s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 85m 25s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 141m 18s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9938 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12987113/YARN-9938.006.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 23c614de9e50 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 44f7b91 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25245/testReport/ |
| Max. process+thread count | 822 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25245/console |
| Powered by | Apache
[jira] [Commented] (YARN-9938) Validate Parent Queue for QueueMapping contains dynamic group as parent queue
[ https://issues.apache.org/jira/browse/YARN-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984712#comment-16984712 ]

Manikandan R commented on YARN-9938:
------------------------------------

Fixed whitespace issues.

> Validate Parent Queue for QueueMapping contains dynamic group as parent queue
> -----------------------------------------------------------------------------
>
>                 Key: YARN-9938
>                 URL: https://issues.apache.org/jira/browse/YARN-9938
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Manikandan R
>            Assignee: Manikandan R
>            Priority: Major
>         Attachments: YARN-9938.001.patch, YARN-9938.002.patch, YARN-9938.003.patch, YARN-9938.004.patch, YARN-9938.005.patch, YARN-9938.006.patch
>
>
> Currently, {{UserGroupMappingPlacementRule#validateParentQueue}} validates the parent queue using the queue path. With dynamic groups via %primary_group and %secondary_group in place (refer to YARN-9841 and YARN-9865), parent queue validation should also happen for these two queue mappings, after resolving the wildcard pattern to the corresponding groups at runtime.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
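The validation described above can be illustrated with a small sketch: a %primary_group (or %secondary_group) parent mapping cannot be checked statically, so the wildcard is first resolved to one of the user's actual groups and only then validated against the configured parent queues. This is a hypothetical model, not the real UserGroupMappingPlacementRule code; the group-lookup map and queue set are invented for the example, and secondary-group resolution is simplified (the real rule searches the user's non-primary groups for one that matches a queue):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ParentQueueValidationDemo {

    // Hypothetical stand-in for the cluster's user-to-groups resolver
    // (first entry is the primary group, the rest are secondary).
    static final Map<String, List<String>> USER_GROUPS = Map.of(
        "alice", List.of("engineering", "oncall"),
        "bob", List.of("marketing"));

    // Hypothetical set of parent queues defined in capacity-scheduler.xml.
    static final Set<String> PARENT_QUEUES = Set.of("engineering", "marketing");

    // Resolve %primary_group / %secondary_group at runtime, then validate
    // the resolved name against the configured parent queues.
    static String resolveAndValidateParent(String user, String parentMapping) {
        List<String> groups = USER_GROUPS.getOrDefault(user, List.of());
        String parent = parentMapping;
        if ("%primary_group".equals(parentMapping)) {
            parent = groups.isEmpty() ? null : groups.get(0);
        } else if ("%secondary_group".equals(parentMapping)) {
            parent = groups.size() < 2 ? null : groups.get(1); // simplified
        }
        if (parent == null || !PARENT_QUEUES.contains(parent)) {
            throw new IllegalArgumentException(
                "mapping to parent queue " + parent + " is invalid for user " + user);
        }
        return parent;
    }

    public static void main(String[] args) {
        // alice's primary group "engineering" is a configured parent queue.
        System.out.println(resolveAndValidateParent("alice", "%primary_group"));
        try {
            // alice's secondary group "oncall" is not a queue -> rejected.
            resolveAndValidateParent("alice", "%secondary_group");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```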
[jira] [Updated] (YARN-9938) Validate Parent Queue for QueueMapping contains dynamic group as parent queue
[ https://issues.apache.org/jira/browse/YARN-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manikandan R updated YARN-9938:
-------------------------------
    Attachment: YARN-9938.006.patch
[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command
[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984638#comment-16984638 ]

Hadoop QA commented on YARN-9985:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 37s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 20s | trunk passed |
| +1 | compile | 0m 25s | trunk passed |
| +1 | checkstyle | 0m 20s | trunk passed |
| +1 | mvnsite | 0m 28s | trunk passed |
| +1 | shadedclient | 13m 31s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 42s | trunk passed |
| +1 | javadoc | 0m 29s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 34s | the patch passed |
| +1 | compile | 0m 26s | the patch passed |
| +1 | javac | 0m 26s | the patch passed |
| +1 | checkstyle | 0m 20s | the patch passed |
| +1 | mvnsite | 0m 27s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 16m 16s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 48s | the patch passed |
| +1 | javadoc | 0m 24s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 26m 10s | hadoop-yarn-client in the patch failed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 82m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.cli.TestRMAdminCLI |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9985 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12987095/YARN-9985-01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7efbb3988542 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 44f7b91 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/25244/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25244/testReport/ |
| Max. process+thread count | 531 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25244/console |
| Powered by | Apache
[jira] [Updated] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command
[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated YARN-9985:
-------------------------------
    Summary: Unsupported "transitionToObserver" option displaying for rmadmin command  (was: YARN: Unsupported "transitionToObserver" option displaying for rmadmin command)

> Unsupported "transitionToObserver" option displaying for rmadmin command
> -------------------------------------------------------------------------
>
>                 Key: YARN-9985
>                 URL: https://issues.apache.org/jira/browse/YARN-9985
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: RM, yarn
>    Affects Versions: 3.2.1
>            Reporter: Souryakanta Dwivedy
>            Assignee: Ayush Saxena
>            Priority: Minor
>         Attachments: YARN-9985-01.patch, image-2019-11-18-18-31-17-755.png, image-2019-11-18-18-35-54-688.png
>
>
> The unsupported "-transitionToObserver" option is displayed for the rmadmin command.
> Check the options for the yarn rmadmin command: it displays the "-transitionToObserver" option, which is not supported by yarn rmadmin. This is wrong behavior. However, "yarn rmadmin -help" does not display any "-transitionToObserver" option.
>
> !image-2019-11-18-18-31-17-755.png!
>
> ==========================================================
> install/hadoop/resourcemanager/bin> ./yarn rmadmin -help
> rmadmin is the command to execute YARN administrative commands.
> The full syntax is:
> yarn rmadmin [-refreshQueues] [-refreshNodes [-g|graceful [timeout in seconds] -client|server]] [-refreshNodesResources] [-refreshSuperUserGroupsConfiguration] [-refreshUserToGroupsMappings] [-refreshAdminAcls] [-refreshServiceAcl] [-getGroup [username]] [-addToClusterNodeLabels <"label1(exclusive=true),label2(exclusive=false),label3">] [-removeFromClusterNodeLabels ] [-replaceLabelsOnNode <"node1[:port]=label1,label2 node2[:port]=label1"> [-failOnUnknownNodes]] [-directlyAccessNodeLabelStore] [-refreshClusterMaxPriority] [-updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) or -updateNodeResource [NodeID] [ResourceTypes] ([OvercommitTimeout])] *{color:#FF}[-transitionToActive [--forceactive] ]{color} {color:#FF}[-transitionToStandby ]{color}* [-getServiceState ] [-getAllServiceState] [-checkHealth ] [-help [cmd]]
> -refreshQueues: Reload the queues' acls, states and scheduler specific properties. ResourceManager will reload the mapred-queues configuration file.
> -refreshNodes [-g|graceful [timeout in seconds] -client|server]: Refresh the hosts information at the ResourceManager. Here [-g|graceful [timeout in seconds] -client|server] is optional; if we specify the timeout then ResourceManager will wait for timeout before marking the NodeManager as decommissioned. The -client|server indicates if the timeout tracking should be handled by the client or the ResourceManager. The client-side tracking is blocking, while the server-side tracking is not. Omitting the timeout, or a timeout of -1, indicates an infinite timeout. Known Issue: the server-side tracking will immediately decommission if an RM HA failover occurs.
> -refreshNodesResources: Refresh resources of NodeManagers at the ResourceManager.
> -refreshSuperUserGroupsConfiguration: Refresh superuser proxy groups mappings
> -refreshUserToGroupsMappings: Refresh user-to-groups mappings
> -refreshAdminAcls: Refresh acls for administration of ResourceManager
> -refreshServiceAcl: Reload the service-level authorization policy file. ResourceManager will reload the authorization policy file.
> -getGroups [username]: Get the groups which given user belongs to.
> -addToClusterNodeLabels <"label1(exclusive=true),label2(exclusive=false),label3">: add to cluster node labels. Default exclusivity is true
> -removeFromClusterNodeLabels (label splitted by ","): remove from cluster node labels
> -replaceLabelsOnNode <"node1[:port]=label1,label2 node2[:port]=label1,label2"> [-failOnUnknownNodes]: replace labels on nodes (please note that we do not support specifying multiple labels on a single host for now.) [-failOnUnknownNodes] is optional; when we set this option, it will fail if specified nodes are unknown.
> -directlyAccessNodeLabelStore: This is DEPRECATED, will be removed in future releases. Directly access node label store; with this option, all node label related operations will not connect RM. Instead, they will access/modify stored node labels directly. By default, it is false (access via RM). AND PLEASE NOTE: if you configured yarn.node-labels.fs-store.root-dir to a local directory (instead of NFS or HDFS), this option will only work when the command run on the machine where RM is running.
> -refreshClusterMaxPriority: Refresh cluster max priority
> -updateNodeResource [NodeID]
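The bug above is the generic HA usage text advertising every HA sub-command, including -transitionToObserver, which the YARN ResourceManager does not implement. One shape such a fix can take is sketched below: build the usage list, then drop unsupported entries before printing. The map and helper are invented for illustration and are not the actual RMAdminCLI code; only the sub-command names come from the help text quoted above:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class RmAdminUsageDemo {

    // HA sub-commands as a shared HA admin tool might advertise them
    // (argument strings are illustrative).
    static Map<String, String> haUsage() {
        Map<String, String> usage = new LinkedHashMap<>();
        usage.put("-transitionToActive", "[--forceactive] <serviceId>");
        usage.put("-transitionToStandby", "<serviceId>");
        usage.put("-transitionToObserver", "<serviceId>"); // observer state is not a YARN RM state
        usage.put("-getServiceState", "<serviceId>");
        usage.put("-getAllServiceState", "");
        usage.put("-checkHealth", "<serviceId>");
        return usage;
    }

    // rmadmin should only print the sub-commands YARN actually implements,
    // so unsupported inherited entries are filtered out before display.
    static Map<String, String> rmAdminUsage(Set<String> unsupported) {
        Map<String, String> usage = haUsage();
        usage.keySet().removeAll(unsupported);
        return usage;
    }

    public static void main(String[] args) {
        rmAdminUsage(Set.of("-transitionToObserver"))
            .forEach((cmd, argSpec) -> System.out.println(cmd + " " + argSpec));
    }
}
```

Filtering at display time keeps the shared HA plumbing intact while making the advertised options consistent with what "yarn rmadmin -help" accepts.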
[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command
[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984617#comment-16984617 ]

Ayush Saxena commented on YARN-9985:
------------------------------------

Thanx [~SouryakantaDwivedy] for the report. Uploaded patch with the fix, Pls Review!!!
[jira] [Updated] (YARN-9985) YARN: Unsupported "transitionToObserver" option displaying for rmadmin command
[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated YARN-9985: --- Attachment: YARN-9985-01.patch > YARN: Unsupported "transitionToObserver" option displaying for rmadmin command > -- > > Key: YARN-9985 > URL: https://issues.apache.org/jira/browse/YARN-9985 > Project: Hadoop YARN > Issue Type: Bug > Components: RM, yarn >Affects Versions: 3.2.1 >Reporter: Souryakanta Dwivedy >Priority: Minor > Attachments: YARN-9985-01.patch, image-2019-11-18-18-31-17-755.png, > image-2019-11-18-18-35-54-688.png > > > Unsupported "transitionToObserver" option displaying for rmadmin command > Check the options for Yarn rmadmin command > It will display the "-transitionToObserver " option which is not > supported > by yarn rmadmin command which is wrong behavior. > But if you check the yarn rmadmin -help it will not display any option > "-transitionToObserver " > > !image-2019-11-18-18-31-17-755.png! > > == > install/hadoop/resourcemanager/bin> ./yarn rmadmin -help > rmadmin is the command to execute YARN administrative commands. 
> The full syntax is: > yarn rmadmin [-refreshQueues] [-refreshNodes [-g|graceful [timeout in > seconds] -client|server]] [-refreshNodesResources] > [-refreshSuperUserGroupsConfiguration] [-refreshUserToGroupsMappings] > [-refreshAdminAcls] [-refreshServiceAcl] [-getGroup [username]] > [-addToClusterNodeLabels > <"label1(exclusive=true),label2(exclusive=false),label3">] > [-removeFromClusterNodeLabels ] [-replaceLabelsOnNode > <"node1[:port]=label1,label2 node2[:port]=label1"> [-failOnUnknownNodes]] > [-directlyAccessNodeLabelStore] [-refreshClusterMaxPriority] > [-updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) or > -updateNodeResource [NodeID] [ResourceTypes] ([OvercommitTimeout])] > *[-transitionToActive [--forceactive] ] > [-transitionToStandby ]* [-getServiceState > ] [-getAllServiceState] [-checkHealth ] [-help [cmd]] > -refreshQueues: Reload the queues' acls, states and scheduler specific > properties. > ResourceManager will reload the mapred-queues configuration file. > -refreshNodes [-g|graceful [timeout in seconds] -client|server]: Refresh the > hosts information at the ResourceManager. Here [-g|graceful [timeout in > seconds] -client|server] is optional, if we specify the timeout then > ResourceManager will wait for timeout before marking the NodeManager as > decommissioned. The -client|server indicates if the timeout tracking should > be handled by the client or the ResourceManager. The client-side tracking is > blocking, while the server-side tracking is not. Omitting the timeout, or a > timeout of -1, indicates an infinite timeout. Known Issue: the server-side > tracking will immediately decommission if an RM HA failover occurs. > -refreshNodesResources: Refresh resources of NodeManagers at the > ResourceManager. 
> -refreshSuperUserGroupsConfiguration: Refresh superuser proxy groups mappings > -refreshUserToGroupsMappings: Refresh user-to-groups mappings > -refreshAdminAcls: Refresh acls for administration of ResourceManager > -refreshServiceAcl: Reload the service-level authorization policy file. > ResourceManager will reload the authorization policy file. > -getGroups [username]: Get the groups which given user belongs to. > -addToClusterNodeLabels > <"label1(exclusive=true),label2(exclusive=false),label3">: add to cluster > node labels. Default exclusivity is true > -removeFromClusterNodeLabels (label splitted by ","): > remove from cluster node labels > -replaceLabelsOnNode <"node1[:port]=label1,label2 > node2[:port]=label1,label2"> [-failOnUnknownNodes] : replace labels on nodes > (please note that we do not support specifying multiple labels on a single > host for now.) > [-failOnUnknownNodes] is optional, when we set this option, it will fail if > specified nodes are unknown. > -directlyAccessNodeLabelStore: This is DEPRECATED, will be removed in future > releases. Directly access node label store, with this option, all node label > related operations will not connect RM. Instead, they will access/modify > stored node labels directly. By default, it is false (access via RM). AND > PLEASE NOTE: if you configured yarn.node-labels.fs-store.root-dir to a local > directory (instead of NFS or HDFS), this option will only work when the > command run on the machine where RM is running. > -refreshClusterMaxPriority: Refresh cluster max priority > -updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) > or > [NodeID] [resourcetypes] ([OvercommitTimeout]). : Update resource on > specific node. > -transitionToActive
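The bug class reported here can be sketched in miniature: a usage string assembled from a shared HA option table will advertise options, such as -transitionToObserver, that the command never registered a handler for. The sketch below uses invented names (it is not the actual RMAdminCLI code) and assumes the fix is to filter the printed usage by the options that actually have handlers.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the usage/handler mismatch described in YARN-9985.
// None of these names exist in Hadoop; they only illustrate the pattern.
class UsageMismatchSketch {
  // Options a shared HA base class knows about (sketch).
  static final List<String> HA_USAGE_OPTIONS = Arrays.asList(
      "-transitionToActive", "-transitionToStandby", "-transitionToObserver");

  // Options this particular admin command actually implements (sketch).
  static final List<String> SUPPORTED = Arrays.asList(
      "-transitionToActive", "-transitionToStandby");

  // Build the usage list from the shared table, but keep only options
  // that have a registered handler, so unsupported ones never print.
  static List<String> supportedUsage() {
    List<String> out = new ArrayList<>();
    for (String opt : HA_USAGE_OPTIONS) {
      if (SUPPORTED.contains(opt)) {
        out.add(opt);
      }
    }
    return out;
  }
}
```

With this filtering in place, the usage text and -help output are derived from the same supported set, so they cannot drift apart the way the report above shows.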
[jira] [Assigned] (YARN-9985) YARN: Unsupported "transitionToObserver" option displaying for rmadmin command
[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reassigned YARN-9985: -- Assignee: Ayush Saxena > YARN: Unsupported "transitionToObserver" option displaying for rmadmin command > -- > > Key: YARN-9985 > URL: https://issues.apache.org/jira/browse/YARN-9985 > Project: Hadoop YARN > Issue Type: Bug > Components: RM, yarn >Affects Versions: 3.2.1 >Reporter: Souryakanta Dwivedy >Assignee: Ayush Saxena >Priority: Minor > Attachments: YARN-9985-01.patch, image-2019-11-18-18-31-17-755.png, > image-2019-11-18-18-35-54-688.png > > > Unsupported "transitionToObserver" option displaying for rmadmin command > Check the options for Yarn rmadmin command > It will display the "-transitionToObserver " option which is not > supported > by yarn rmadmin command which is wrong behavior. > But if you check the yarn rmadmin -help it will not display any option > "-transitionToObserver " > > !image-2019-11-18-18-31-17-755.png! > > == > install/hadoop/resourcemanager/bin> ./yarn rmadmin -help > rmadmin is the command to execute YARN administrative commands. 
> The full syntax is: > yarn rmadmin [-refreshQueues] [-refreshNodes [-g|graceful [timeout in > seconds] -client|server]] [-refreshNodesResources] > [-refreshSuperUserGroupsConfiguration] [-refreshUserToGroupsMappings] > [-refreshAdminAcls] [-refreshServiceAcl] [-getGroup [username]] > [-addToClusterNodeLabels > <"label1(exclusive=true),label2(exclusive=false),label3">] > [-removeFromClusterNodeLabels ] [-replaceLabelsOnNode > <"node1[:port]=label1,label2 node2[:port]=label1"> [-failOnUnknownNodes]] > [-directlyAccessNodeLabelStore] [-refreshClusterMaxPriority] > [-updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) or > -updateNodeResource [NodeID] [ResourceTypes] ([OvercommitTimeout])] > *[-transitionToActive [--forceactive] ] > [-transitionToStandby ]* [-getServiceState > ] [-getAllServiceState] [-checkHealth ] [-help [cmd]] > -refreshQueues: Reload the queues' acls, states and scheduler specific > properties. > ResourceManager will reload the mapred-queues configuration file. > -refreshNodes [-g|graceful [timeout in seconds] -client|server]: Refresh the > hosts information at the ResourceManager. Here [-g|graceful [timeout in > seconds] -client|server] is optional, if we specify the timeout then > ResourceManager will wait for timeout before marking the NodeManager as > decommissioned. The -client|server indicates if the timeout tracking should > be handled by the client or the ResourceManager. The client-side tracking is > blocking, while the server-side tracking is not. Omitting the timeout, or a > timeout of -1, indicates an infinite timeout. Known Issue: the server-side > tracking will immediately decommission if an RM HA failover occurs. > -refreshNodesResources: Refresh resources of NodeManagers at the > ResourceManager. 
> -refreshSuperUserGroupsConfiguration: Refresh superuser proxy groups mappings > -refreshUserToGroupsMappings: Refresh user-to-groups mappings > -refreshAdminAcls: Refresh acls for administration of ResourceManager > -refreshServiceAcl: Reload the service-level authorization policy file. > ResourceManager will reload the authorization policy file. > -getGroups [username]: Get the groups which given user belongs to. > -addToClusterNodeLabels > <"label1(exclusive=true),label2(exclusive=false),label3">: add to cluster > node labels. Default exclusivity is true > -removeFromClusterNodeLabels (label splitted by ","): > remove from cluster node labels > -replaceLabelsOnNode <"node1[:port]=label1,label2 > node2[:port]=label1,label2"> [-failOnUnknownNodes] : replace labels on nodes > (please note that we do not support specifying multiple labels on a single > host for now.) > [-failOnUnknownNodes] is optional, when we set this option, it will fail if > specified nodes are unknown. > -directlyAccessNodeLabelStore: This is DEPRECATED, will be removed in future > releases. Directly access node label store, with this option, all node label > related operations will not connect RM. Instead, they will access/modify > stored node labels directly. By default, it is false (access via RM). AND > PLEASE NOTE: if you configured yarn.node-labels.fs-store.root-dir to a local > directory (instead of NFS or HDFS), this option will only work when the > command run on the machine where RM is running. > -refreshClusterMaxPriority: Refresh cluster max priority > -updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) > or > [NodeID] [resourcetypes] ([OvercommitTimeout]). : Update resource on > specific
[jira] [Commented] (YARN-9938) Validate Parent Queue for QueueMapping contains dynamic group as parent queue
[ https://issues.apache.org/jira/browse/YARN-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984610#comment-16984610 ] Hadoop QA commented on YARN-9938: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 43 unchanged - 1 fixed = 43 total (was 44) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 16s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | YARN-9938 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12987083/YARN-9938.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 28a666beb21d 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 46166bd | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/25243/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25243/testReport/ | | Max. process+thread count | 821 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U:
[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests
[ https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984596#comment-16984596 ] Hadoop QA commented on YARN-5106: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 29 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 14s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 5 new + 278 unchanged - 52 fixed = 283 total (was 330) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 1s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 17s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 45s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | YARN-5106 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12987076/YARN-5106.014.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e39a04daac57 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 46166bd | | maven | version: Apache
[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder
[ https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984591#comment-16984591 ] Hadoop QA commented on YARN-9052: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 88 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 6s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 25s{color} | {color:orange} root: The patch generated 42 new + 1829 unchanged - 58 fixed = 1871 total (was 1887) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 4s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 7s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 4s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}228m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | YARN-9052 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12987074/YARN-9052.009.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9a8d33d5fee5 4.15.0-58-generic #64-Ubuntu SMP Tue
[jira] [Updated] (YARN-9938) Validate Parent Queue for QueueMapping contains dynamic group as parent queue
[ https://issues.apache.org/jira/browse/YARN-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manikandan R updated YARN-9938: --- Attachment: YARN-9938.005.patch > Validate Parent Queue for QueueMapping contains dynamic group as parent queue > - > > Key: YARN-9938 > URL: https://issues.apache.org/jira/browse/YARN-9938 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-9938.001.patch, YARN-9938.002.patch, > YARN-9938.003.patch, YARN-9938.004.patch, YARN-9938.005.patch > > > Currently \{{UserGroupMappingPlacementRule#validateParentQueue}} validates > the parent queue using queue path. With dynamic group using %primary_group > and %secondary_group in place (Refer YARN-9841 and YARN-9865) , parent queue > validation should also happen for these above 2 queue mappings after > resolving the above wildcard pattern to corresponding groups at runtime. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
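The point of the issue above is that a %primary_group or %secondary_group parent queue is only meaningful after it is resolved against the submitting user's real groups, so parent-queue validation must run on the resolved name. A minimal sketch, with invented names (not the real UserGroupMappingPlacementRule):

```java
import java.util.List;

// Hypothetical sketch of wildcard parent-queue resolution for YARN-9938.
// Names are invented for illustration only.
class ParentQueueResolutionSketch {
  // Resolve %primary_group / %secondary_group against the user's groups;
  // a static queue name passes through unchanged.
  static String resolveParent(String parentQueue, List<String> userGroups) {
    if ("%primary_group".equals(parentQueue)) {
      return userGroups.get(0);
    }
    if ("%secondary_group".equals(parentQueue) && userGroups.size() > 1) {
      return userGroups.get(1);
    }
    return parentQueue;
  }

  // Validation only makes sense on the resolved name, which is the
  // change the patch above asks for.
  static boolean isValidParent(String parentQueue, List<String> userGroups,
                               List<String> knownParentQueues) {
    return knownParentQueues.contains(resolveParent(parentQueue, userGroups));
  }
}
```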
[jira] [Commented] (YARN-9938) Validate Parent Queue for QueueMapping contains dynamic group as parent queue
[ https://issues.apache.org/jira/browse/YARN-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984524#comment-16984524 ] Manikandan R commented on YARN-9938: Thanks. Attached .005.patch. > Validate Parent Queue for QueueMapping contains dynamic group as parent queue > - > > Key: YARN-9938 > URL: https://issues.apache.org/jira/browse/YARN-9938 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-9938.001.patch, YARN-9938.002.patch, > YARN-9938.003.patch, YARN-9938.004.patch > > > Currently \{{UserGroupMappingPlacementRule#validateParentQueue}} validates > the parent queue using queue path. With dynamic group using %primary_group > and %secondary_group in place (Refer YARN-9841 and YARN-9865) , parent queue > validation should also happen for these above 2 queue mappings after > resolving the above wildcard pattern to corresponding groups at runtime.
[jira] [Commented] (YARN-9923) Introduce HealthReporter interface and implement running Docker daemon checker
[ https://issues.apache.org/jira/browse/YARN-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984516#comment-16984516 ] Hadoop QA commented on YARN-9923: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 26 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 6s{color} | {color:green} root generated 0 new + 1868 unchanged - 2 fixed = 1868 total (was 1870) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 9s{color} | {color:orange} root: The patch generated 2 new + 596 unchanged - 52 fixed = 598 total (was 648) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 29s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 45s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} |
[jira] [Commented] (YARN-9970) Refactor TestUserGroupMappingPlacementRule#verifyQueueMapping
[ https://issues.apache.org/jira/browse/YARN-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984508#comment-16984508 ] Peter Bacsko commented on YARN-9970: Wow, checkstyle is complaining like crazy. Well, what can we do?
# Just move the lone ");" back to the previous line to get rid of all the "rparen" indentation whining.
# Small inconsistency: sometimes {{.expectedQueue("default")}} is followed by {{build()}} on the next line, sometimes it is {{.expectedQueue("default").build()}}. Let's always move the {{build()}} call to the next line.
# If we don't care about the visibility stuff in {{QueueMapping}}, just put {{@SuppressWarnings("checkstyle:visibilitymodifier")}} on it. Alternatively, you can modify the code as suggested to clear the warning.
> Refactor TestUserGroupMappingPlacementRule#verifyQueueMapping > - > > Key: YARN-9970 > URL: https://issues.apache.org/jira/browse/YARN-9970 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-9970.001.patch, YARN-9970.002.patch, > YARN-9970.003.patch, YARN-9970.004.patch, YARN-9970.005.patch > > > Scope of this Jira is to refactor > TestUserGroupMappingPlacementRule#verifyQueueMapping and QueueMapping class > as discussed in > https://issues.apache.org/jira/browse/YARN-9865?focusedCommentId=16971482=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16971482
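The builder formatting convention discussed above can be illustrated with a hypothetical mini-builder (invented names, not the real QueueMapping class): chained setters, with {{build()}} always on its own line at the call site.

```java
// Hypothetical mini-builder illustrating the review comments above.
// QueueMappingSketch is an invented stand-in, not Hadoop's QueueMapping.
class QueueMappingSketch {
  private final String source;
  private final String queue;

  private QueueMappingSketch(Builder b) {
    this.source = b.source;
    this.queue = b.queue;
  }

  String path() {
    return source + ":" + queue;
  }

  static class Builder {
    private String source;
    private String queue;

    Builder source(String s) {
      this.source = s;
      return this;
    }

    Builder expectedQueue(String q) {
      this.queue = q;
      return this;
    }

    QueueMappingSketch build() {
      return new QueueMappingSketch(this);
    }
  }
}
```

A call site in the proposed style, with build() on its own line:

```java
QueueMappingSketch mapping = new QueueMappingSketch.Builder()
    .source("user1")
    .expectedQueue("default")
    .build();
```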
[jira] [Commented] (YARN-9938) Validate Parent Queue for QueueMapping contains dynamic group as parent queue
[ https://issues.apache.org/jira/browse/YARN-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984500#comment-16984500 ] Peter Bacsko commented on YARN-9938: # Please address the checkstyle issues (if they're addressable, unlike a couple of weeks ago, when we decided to ignore those) # Add a minor comment to the catch clause to indicate that the exception is expected {noformat} try { testNestedUserQueueWithDynamicParentQueue(queueMappingsForUG, true, "h"); fail("Leaf Queue 'h' doesn't exist"); } catch (YarnException e) { // expected } try { testNestedUserQueueWithDynamicParentQueue(queueMappingsForUG, true, "a1"); fail("Actual Parent Queue of Leaf Queue 'a1' is 'a', but as per queue " + "mapping it returns primary queue as 'a1group'"); } catch (YarnException e) { // expected } {noformat} > Validate Parent Queue for QueueMapping contains dynamic group as parent queue > - > > Key: YARN-9938 > URL: https://issues.apache.org/jira/browse/YARN-9938 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-9938.001.patch, YARN-9938.002.patch, > YARN-9938.003.patch, YARN-9938.004.patch > > > Currently \{{UserGroupMappingPlacementRule#validateParentQueue}} validates > the parent queue using queue path. With dynamic group using %primary_group > and %secondary_group in place (Refer YARN-9841 and YARN-9865) , parent queue > validation should also happen for these above 2 queue mappings after > resolving the above wildcard pattern to corresponding groups at runtime. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-9866) u:user2:%primary_group is not working as expected
[ https://issues.apache.org/jira/browse/YARN-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984494#comment-16984494 ] Peter Bacsko edited comment on YARN-9866 at 11/28/19 3:29 PM: -- +1 (non-binding) [~snemeth] please review was (Author: pbacsko): +1 (non-binding) > u:user2:%primary_group is not working as expected > - > > Key: YARN-9866 > URL: https://issues.apache.org/jira/browse/YARN-9866 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-9866.001.patch, YARN-9866.002.patch, > YARN-9866.003.patch > > > Please refer #1 in > https://issues.apache.org/jira/browse/YARN-9841?focusedCommentId=16937024=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16937024 > for more details -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9866) u:user2:%primary_group is not working as expected
[ https://issues.apache.org/jira/browse/YARN-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984494#comment-16984494 ] Peter Bacsko commented on YARN-9866: +1 (non-binding) > u:user2:%primary_group is not working as expected > - > > Key: YARN-9866 > URL: https://issues.apache.org/jira/browse/YARN-9866 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-9866.001.patch, YARN-9866.002.patch, > YARN-9866.003.patch > > > Please refer #1 in > https://issues.apache.org/jira/browse/YARN-9841?focusedCommentId=16937024=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16937024 > for more details -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9969) Improve yarn.scheduler.capacity.queue-mappings documentation
[ https://issues.apache.org/jira/browse/YARN-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984493#comment-16984493 ] Peter Bacsko commented on YARN-9969: [~snemeth] please check if this is good for commit. +1 non-binding from me. > Improve yarn.scheduler.capacity.queue-mappings documentation > > > Key: YARN-9969 > URL: https://issues.apache.org/jira/browse/YARN-9969 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-9969.001.patch, YARN-9969.002.patch, > YARN-9969.003.patch, YARN-9969.004.patch > > > As discussed in > https://issues.apache.org/jira/browse/YARN-9865?focusedCommentId=16971482=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16971482, > scope of this Jira is to improve the yarn.scheduler.capacity.queue-mappings > in CapacityScheduler.md. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests
[ https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984483#comment-16984483 ] Adam Antal commented on YARN-5106: -- Uploaded patchset v14 fixing checkstyle issues, and using the new builder for the remained hardcoded allocation files throughout the codebase (due to YARN-9899). > Provide a builder interface for FairScheduler allocations for use in tests > -- > > Key: YARN-5106 > URL: https://issues.apache.org/jira/browse/YARN-5106 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Adam Antal >Priority: Major > Labels: newbie++ > Attachments: YARN-5106-branch-3.1.001.patch, > YARN-5106-branch-3.1.001.patch, YARN-5106-branch-3.1.001.patch, > YARN-5106-branch-3.1.002.patch, YARN-5106-branch-3.2.001.patch, > YARN-5106-branch-3.2.001.patch, YARN-5106-branch-3.2.002.patch, > YARN-5106.001.patch, YARN-5106.002.patch, YARN-5106.003.patch, > YARN-5106.004.patch, YARN-5106.005.patch, YARN-5106.006.patch, > YARN-5106.007.patch, YARN-5106.008.patch, YARN-5106.008.patch, > YARN-5106.008.patch, YARN-5106.009.patch, YARN-5106.010.patch, > YARN-5106.011.patch, YARN-5106.012.patch, YARN-5106.013.patch, > YARN-5106.014.patch > > > Most, if not all, fair scheduler tests create an allocations XML file. Having > a helper class that potentially uses a builder would make the tests cleaner. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests
[ https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated YARN-5106: - Attachment: YARN-5106.014.patch > Provide a builder interface for FairScheduler allocations for use in tests > -- > > Key: YARN-5106 > URL: https://issues.apache.org/jira/browse/YARN-5106 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Adam Antal >Priority: Major > Labels: newbie++ > Attachments: YARN-5106-branch-3.1.001.patch, > YARN-5106-branch-3.1.001.patch, YARN-5106-branch-3.1.001.patch, > YARN-5106-branch-3.1.002.patch, YARN-5106-branch-3.2.001.patch, > YARN-5106-branch-3.2.001.patch, YARN-5106-branch-3.2.002.patch, > YARN-5106.001.patch, YARN-5106.002.patch, YARN-5106.003.patch, > YARN-5106.004.patch, YARN-5106.005.patch, YARN-5106.006.patch, > YARN-5106.007.patch, YARN-5106.008.patch, YARN-5106.008.patch, > YARN-5106.008.patch, YARN-5106.009.patch, YARN-5106.010.patch, > YARN-5106.011.patch, YARN-5106.012.patch, YARN-5106.013.patch, > YARN-5106.014.patch > > > Most, if not all, fair scheduler tests create an allocations XML file. Having > a helper class that potentially uses a builder would make the tests cleaner. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9052) Replace all MockRM submit method definitions with a builder
[ https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9052: - Attachment: YARN-9052.009.patch > Replace all MockRM submit method definitions with a builder > --- > > Key: YARN-9052 > URL: https://issues.apache.org/jira/browse/YARN-9052 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > Attachments: > YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt, > YARN-9052-testlogs003-justfailed.txt, > YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt, > YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, > YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, > YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, > YARN-9052.007.patch, YARN-9052.008.patch, YARN-9052.009.patch, > YARN-9052.testlogs.002.patch, YARN-9052.testlogs.002.patch, > YARN-9052.testlogs.003.patch, YARN-9052.testlogs.patch > > > MockRM has 31 definitions of submitApp, most of them having more than > acceptable number of parameters, ranging from 2 to even 22 parameters, which > makes the code completely unreadable. > On top of unreadability, it's very hard to follow what RmApp will be produced > for tests as they often pass a lot of empty / null values as parameters. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder
[ https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984454#comment-16984454 ] Szilard Nemeth commented on YARN-9052: -- Hi [~sunilg]! Uploaded patch009. 1. To answer your question about MockRM: This class is still used as it was. I only extracted the code from MockRM that submits applications, so this is why I have MockRMAppSubmitter. As MockRM is a mocked out version of RM (extended from ResourceManager) that is used extensively in almost all the tests, it's pretty hard to replace it with anything more lightweight. Doing that is certainly worth its own jira and this is out of the scope of this jira. 2. Fixed javadoc checkstyle issues. The rest of the checkstyle issues seems a lot but they are either: * HiddenField errors (in MockRMAppSubmissionData.Builder) that we can ignore, I think. * Coming from code that I just moved, like "MethodLength" or "LocalVariableName" issues. What do you think? > Replace all MockRM submit method definitions with a builder > --- > > Key: YARN-9052 > URL: https://issues.apache.org/jira/browse/YARN-9052 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > Attachments: > YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt, > YARN-9052-testlogs003-justfailed.txt, > YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt, > YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, > YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, > YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, > YARN-9052.007.patch, YARN-9052.008.patch, YARN-9052.009.patch, > YARN-9052.testlogs.002.patch, YARN-9052.testlogs.002.patch, > YARN-9052.testlogs.003.patch, YARN-9052.testlogs.patch > > > MockRM has 31 definitions of submitApp, most of them having more 
than > acceptable number of parameters, ranging from 2 to even 22 parameters, which > makes the code completely unreadable. > On top of unreadability, it's very hard to follow what RmApp will be produced > for tests as they often pass a lot of empty / null values as parameters. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9923) Introduce HealthReporter interface and implement running Docker daemon checker
[ https://issues.apache.org/jira/browse/YARN-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984394#comment-16984394 ] Adam Antal commented on YARN-9923: -- The {{NodeHealthChecker}} instance can be null - thus the NPEs. Fixed in patchset v7 along with the checkstyle issues. > Introduce HealthReporter interface and implement running Docker daemon checker > -- > > Key: YARN-9923 > URL: https://issues.apache.org/jira/browse/YARN-9923 > Project: Hadoop YARN > Issue Type: New Feature > Components: nodemanager, yarn >Affects Versions: 3.2.1 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: YARN-9923.001.patch, YARN-9923.002.patch, > YARN-9923.003.patch, YARN-9923.004.patch, YARN-9923.005.patch, > YARN-9923.006.patch > > > Currently if a NodeManager is enabled to allocate Docker containers, but the > specified binary (docker.binary in the container-executor.cfg) is missing the > container allocation fails with the following error message: > {noformat} > Container launch fails > Exit code: 29 > Exception message: Launch container failed > Shell error output: sh: : No > such file or directory > Could not inspect docker network to get type /usr/bin/docker network inspect > host --format='{{.Driver}}'. > Error constructing docker command, docker error code=-1, error > message='Unknown error' > {noformat} > I suggest to add a property say "yarn.nodemanager.runtime.linux.docker.check" > to have the following options: > - STARTUP: setting this option the NodeManager would not start if Docker > binaries are missing or the Docker daemon is not running (the exception is > considered FATAL during startup) > - RUNTIME: would give a more detailed/user-friendly exception in > NodeManager's side (NM logs) if Docker binaries are missing or the daemon is > not working. This would also prevent further Docker container allocation as > long as the binaries do not exist and the docker daemon is not running. 
> - NONE (default): preserving the current behaviour, throwing exception during > container allocation, carrying on using the default retry procedure. > > A new interface called {{HealthChecker}} is introduced which is used in the > {{NodeHealthCheckerService}}. Currently existing implementations like > {{LocalDirsHandlerService}} are modified to implement this giving a clear > abstraction to the node's health. The {{DockerHealthChecker}} implements this > new interface. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9923) Introduce HealthReporter interface and implement running Docker daemon checker
[ https://issues.apache.org/jira/browse/YARN-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated YARN-9923: - Attachment: YARN-9923.007.patch > Introduce HealthReporter interface and implement running Docker daemon checker > -- > > Key: YARN-9923 > URL: https://issues.apache.org/jira/browse/YARN-9923 > Project: Hadoop YARN > Issue Type: New Feature > Components: nodemanager, yarn >Affects Versions: 3.2.1 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: YARN-9923.001.patch, YARN-9923.002.patch, > YARN-9923.003.patch, YARN-9923.004.patch, YARN-9923.005.patch, > YARN-9923.006.patch, YARN-9923.007.patch > > > Currently if a NodeManager is enabled to allocate Docker containers, but the > specified binary (docker.binary in the container-executor.cfg) is missing the > container allocation fails with the following error message: > {noformat} > Container launch fails > Exit code: 29 > Exception message: Launch container failed > Shell error output: sh: : No > such file or directory > Could not inspect docker network to get type /usr/bin/docker network inspect > host --format='{{.Driver}}'. > Error constructing docker command, docker error code=-1, error > message='Unknown error' > {noformat} > I suggest to add a property say "yarn.nodemanager.runtime.linux.docker.check" > to have the following options: > - STARTUP: setting this option the NodeManager would not start if Docker > binaries are missing or the Docker daemon is not running (the exception is > considered FATAL during startup) > - RUNTIME: would give a more detailed/user-friendly exception in > NodeManager's side (NM logs) if Docker binaries are missing or the daemon is > not working. This would also prevent further Docker container allocation as > long as the binaries do not exist and the docker daemon is not running. 
> - NONE (default): preserving the current behaviour, throwing exception during > container allocation, carrying on using the default retry procedure. > > A new interface called {{HealthChecker}} is introduced which is used in the > {{NodeHealthCheckerService}}. Currently existing implementations like > {{LocalDirsHandlerService}} are modified to implement this giving a clear > abstraction to the node's health. The {{DockerHealthChecker}} implements this > new interface. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10005) Code improvements in MutableCSConfigurationProvider
Szilard Nemeth created YARN-10005: - Summary: Code improvements in MutableCSConfigurationProvider Key: YARN-10005 URL: https://issues.apache.org/jira/browse/YARN-10005 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth * Important: constructKeyValueConfUpdate and all related methods seem like a separate responsibility: how to convert an incoming SchedConfUpdateInfo to Configuration changes (a Configuration object) * Duplicated code block (9 lines) in init / formatConfigurationInStore methods * Method "getConfStore" could be package-private -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10004) Javadoc of YarnConfigurationStore#initialize is not straightforward
Szilard Nemeth created YARN-10004: - Summary: Javadoc of YarnConfigurationStore#initialize is not straightforward Key: YARN-10004 URL: https://issues.apache.org/jira/browse/YARN-10004 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10003) YarnConfigurationStore#checkVersion throws exception that belongs to RMStateStore
Szilard Nemeth created YARN-10003: - Summary: YarnConfigurationStore#checkVersion throws exception that belongs to RMStateStore Key: YARN-10003 URL: https://issues.apache.org/jira/browse/YARN-10003 Project: Hadoop YARN Issue Type: Bug Reporter: Szilard Nemeth Assignee: Szilard Nemeth RMStateVersionIncompatibleException is thrown from method "checkVersion". Moreover, there's a TODO here saying this method is copied from RMStateStore. We should revise this method a bit. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10002) Code cleanup and improvements ConfigurationStoreBaseTest
Szilard Nemeth created YARN-10002: - Summary: Code cleanup and improvements ConfigurationStoreBaseTest Key: YARN-10002 URL: https://issues.apache.org/jira/browse/YARN-10002 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth * Some protected fields could be package-private * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 updates (Key + value) as this pattern is used extensively in subclasses -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
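The helper method suggested in YARN-10002 above could look roughly like the sketch below. This is a hypothetical illustration: LogMutation here is a minimal stand-in for YarnConfigurationStore.LogMutation (an updates map plus a user name), and the varargs helper name and signature are assumptions, not existing test code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical test helper for ConfigurationStoreBaseTest subclasses;
// LogMutation is a simplified stand-in for the real inner class.
public class LogMutationHelperDemo {
    static final class LogMutation {
        final Map<String, String> updates;
        final String user;
        LogMutation(Map<String, String> updates, String user) {
            this.updates = updates;
            this.user = user;
        }
    }

    // Accepts alternating key/value pairs, covering the 1-, 2- and 3-update
    // cases used repeatedly in the subclasses, e.g.
    // prepareLogMutation("key1", "val1", "key2", "val2").
    static LogMutation prepareLogMutation(String... keyValuePairs) {
        if (keyValuePairs.length % 2 != 0) {
            throw new IllegalArgumentException("Expected alternating key/value pairs");
        }
        Map<String, String> updates = new LinkedHashMap<>();
        for (int i = 0; i < keyValuePairs.length; i += 2) {
            updates.put(keyValuePairs[i], keyValuePairs[i + 1]);
        }
        return new LogMutation(updates, "testUser");
    }

    public static void main(String[] args) {
        LogMutation m = prepareLogMutation("yarn.key1", "val1", "yarn.key2", "val2");
        System.out.println(m.updates.size());
    }
}
```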
[jira] [Created] (YARN-10001) Add explanation of unimplemented methods in InMemoryConfigurationStore
Szilard Nemeth created YARN-10001: - Summary: Add explanation of unimplemented methods in InMemoryConfigurationStore Key: YARN-10001 URL: https://issues.apache.org/jira/browse/YARN-10001 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10000) Code cleanup in FSSchedulerConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-1: -- Description: Some things could be improved: * In initialize: PathFilter can be replaced with lambda * initialize is long, could be split into smaller methods * In method 'format': for-loop can be replaced with foreach * There's a variable with a typo: lastestConfigPath * Add explanation of unimplemented methods * Abstract Filesystem operations away more: * Bad logging: Format string is combined with exception logging. {code:java} LOG.info("Failed to write config version at {}", configVersionFile, e); {code} * Interestingly phrased log messages like "write temp capacity configuration fail" "write temp capacity configuration successfully, schedulerConfigFile=" * Method "writeConfigurationToFileSystem" could be private * Any other code quality improvements was: Some things could be improved: * In initialize: PathFilter can be replaced with lambda * initialize is long, could be split into smaller methods * In method 'format': for-loop can be replaced with foreach * There's a variable with a typo: lastestConfigPath * Add explanation of unimplemented methods * Abstract Filesystem operations away more: * Bad logging: Format string is combined with exception logging. 
* LOG.info("Failed to write config version at {}", configVersionFile, e); * Interestingly phrased log messages like "write temp capacity configuration fail" "write temp capacity configuration successfully, schedulerConfigFile=" * Method "writeConfigurationToFileSystem" could be private * Any other code quality improvements > Code cleanup in FSSchedulerConfigurationStore > - > > Key: YARN-1 > URL: https://issues.apache.org/jira/browse/YARN-1 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > > Some things could be improved: > * In initialize: PathFilter can be replaced with lambda > * initialize is long, could be split into smaller methods > * In method 'format': for-loop can be replaced with foreach > * There's a variable with a typo: lastestConfigPath > * Add explanation of unimplemented methods > * Abstract Filesystem operations away more: > * Bad logging: Format string is combined with exception logging. > {code:java} > LOG.info("Failed to write config version at {}", configVersionFile, e); > {code} > * Interestingly phrased log messages like "write temp capacity configuration > fail" "write temp capacity configuration successfully, schedulerConfigFile=" > * Method "writeConfigurationToFileSystem" could be private > * Any other code quality improvements -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
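The PathFilter-to-lambda cleanup listed in the YARN-10000 description can be sketched as below. PathFilter here is a local stand-in mirroring the single-method org.apache.hadoop.fs.PathFilter interface (so the sketch compiles without Hadoop on the classpath), and the "schedulerconf" prefix is an assumed filename pattern, not taken from the actual store.

```java
// Sketch only: PathFilter mirrors org.apache.hadoop.fs.PathFilter's shape;
// the filename prefix is an assumption for illustration.
public class PathFilterLambdaDemo {
    interface PathFilter {
        boolean accept(String pathName);
    }

    // After: the same filter expressed as a lambda, since PathFilter has a
    // single abstract method.
    static final PathFilter CONFIG_FILES =
        pathName -> pathName.startsWith("schedulerconf");

    public static void main(String[] args) {
        // Before: anonymous inner class, as in the current initialize method.
        PathFilter anon = new PathFilter() {
            @Override
            public boolean accept(String pathName) {
                return pathName.startsWith("schedulerconf");
            }
        };
        System.out.println(anon.accept("schedulerconf.1")
            == CONFIG_FILES.accept("schedulerconf.1"));
    }
}
```

The two forms behave identically; the lambda just removes the boilerplate, which is the kind of mechanical cleanup the issue is asking for.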
[jira] [Updated] (YARN-10000) Code cleanup in FSSchedulerConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-1: -- Description: Some things could be improved: * In initialize: PathFilter can be replaced with lambda * initialize is long, could be split into smaller methods * In method 'format': for-loop can be replaced with foreach * There's a variable with a typo: lastestConfigPath * Add explanation of unimplemented methods * Abstract Filesystem operations away more: * Bad logging: Format string is combined with exception logging. * LOG.info("Failed to write config version at {}", configVersionFile, e); * Interestingly phrased log messages like "write temp capacity configuration fail" "write temp capacity configuration successfully, schedulerConfigFile=" * Method "writeConfigurationToFileSystem" could be private * Any other code quality improvements > Code cleanup in FSSchedulerConfigurationStore > - > > Key: YARN-1 > URL: https://issues.apache.org/jira/browse/YARN-1 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > > Some things could be improved: > * In initialize: PathFilter can be replaced with lambda > * initialize is long, could be split into smaller methods > * In method 'format': for-loop can be replaced with foreach > * There's a variable with a typo: lastestConfigPath > * Add explanation of unimplemented methods > * Abstract Filesystem operations away more: > * Bad logging: Format string is combined with exception logging. 
> * LOG.info("Failed to write config version at {}", configVersionFile, e); > * Interestingly phrased log messages like "write temp capacity configuration > fail" "write temp capacity configuration successfully, schedulerConfigFile=" > * Method "writeConfigurationToFileSystem" could be private > * Any other code quality improvements -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10000) Code cleanup in FSSchedulerConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984380#comment-16984380 ] Adam Antal commented on YARN-1: --- (y) 10k-th jira in YARN! > Code cleanup in FSSchedulerConfigurationStore > - > > Key: YARN-1 > URL: https://issues.apache.org/jira/browse/YARN-1 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10000) Code cleanup in FSSchedulerConfigurationStore
Szilard Nemeth created YARN-1: - Summary: Code cleanup in FSSchedulerConfigurationStore Key: YARN-1 URL: https://issues.apache.org/jira/browse/YARN-1 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-9999) TestFSSchedulerConfigurationStore: Extend from ConfigurationStoreBaseTest, general code cleanup
Szilard Nemeth created YARN-: Summary: TestFSSchedulerConfigurationStore: Extend from ConfigurationStoreBaseTest, general code cleanup Key: YARN- URL: https://issues.apache.org/jira/browse/YARN- Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth All config store tests are extended from ConfigurationStoreBaseTest: * TestInMemoryConfigurationStore * TestLeveldbConfigurationStore * TestZKConfigurationStore TestFSSchedulerConfigurationStore should also extend from it. Additionally, some general code cleanup can be applied as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-9998) Code cleanup in LeveldbConfigurationStore
Szilard Nemeth created YARN-9998: Summary: Code cleanup in LeveldbConfigurationStore Key: YARN-9998 URL: https://issues.apache.org/jira/browse/YARN-9998 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth Many things can be improved: * znodeParentPath could be a local variable * zkManager could be private, VisibleForTesting annotation is not needed anymore * Do something with unchecked casts * zkManager.safeSetData calls almost always take the same set of parameters: Simplify this * Extract zkManager calls to their own methods: They are repeated * Remove TODOs -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9998) Code cleanup in LeveldbConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-9998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9998: - Description: Many things can be improved: * Field compactionTimer could be a local variable * Field versiondb should be camelcase * initDatabase is a very long method: Initialize db / versionDb should be in separate methods, split this method into smaller chunks * Remove TODOs * Remove duplicated code block in LeveldbConfigurationStore.CompactionTimerTask * Any other cleanup was: Many things can be improved: * znodeParentPath could be a local variable * zkManager could be private, VisibleForTesting annotation is not needed anymore * Do something with unchecked casts * zkManager.safeSetData calls almost always take the same set of parameters: Simplify this * Extract zkManager calls to their own methods: They are repeated * Remove TODOs > Code cleanup in LeveldbConfigurationStore > - > > Key: YARN-9998 > URL: https://issues.apache.org/jira/browse/YARN-9998 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Minor > > Many things can be improved: > * Field compactionTimer could be a local variable > * Field versiondb should be camelcase > * initDatabase is a very long method: Initialize db / versionDb should be in > separate methods, split this method into smaller chunks > * Remove TODOs > * Remove duplicated code block in > LeveldbConfigurationStore.CompactionTimerTask > * Any other cleanup -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-9997) Code cleanup in ZKConfigurationStore
Szilard Nemeth created YARN-9997: Summary: Code cleanup in ZKConfigurationStore Key: YARN-9997 URL: https://issues.apache.org/jira/browse/YARN-9997 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth Many things can be improved: * znodeParentPath could be a local variable * zkManager could be private, VisibleForTesting annotation is not needed anymore * Do something with unchecked casts * zkManager.safeSetData calls almost always take the same set of parameters: Simplify this * Extract zkManager calls to their own methods: They are repeated * Remove TODOs -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-9996) Code cleanup in QueueAdminConfigurationMutationACLPolicy
Szilard Nemeth created YARN-9996: Summary: Code cleanup in QueueAdminConfigurationMutationACLPolicy Key: YARN-9996 URL: https://issues.apache.org/jira/browse/YARN-9996 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth Method 'isMutationAllowed' contains many uses of substring and lastIndexOf. These could be extracted and simplified. Some logging could also be added.
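The substring/lastIndexOf extraction mentioned above might look like the following; the helper names are hypothetical, but the dotted queue-path convention ("root.a.b") is standard in the capacity scheduler:

```java
// Hypothetical illustration of pulling the repeated substring/lastIndexOf
// logic out of isMutationAllowed into named, self-documenting helpers.
class QueuePathUtil {
    // Parent of a dotted queue path, e.g. "root.a.b" -> "root.a".
    static String parentOf(String queuePath) {
        int lastDot = queuePath.lastIndexOf('.');
        return lastDot < 0 ? queuePath : queuePath.substring(0, lastDot);
    }

    // Leaf queue name, e.g. "root.a.b" -> "b".
    static String leafOf(String queuePath) {
        int lastDot = queuePath.lastIndexOf('.');
        return lastDot < 0 ? queuePath : queuePath.substring(lastDot + 1);
    }
}
```

Named helpers like these also give natural log points ("checking ACL of parent queue {}") without cluttering the ACL logic itself.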
[jira] [Created] (YARN-9995) Code cleanup in TestSchedConfCLI
Szilard Nemeth created YARN-9995: Summary: Code cleanup in TestSchedConfCLI Key: YARN-9995 URL: https://issues.apache.org/jira/browse/YARN-9995 Project: Hadoop YARN Issue Type: Improvement Reporter: Szilard Nemeth Assignee: Szilard Nemeth Some tests are too verbose: - add / delete / remove queue test cases: a Builder for SchedConfUpdateInfo could be introduced, as this object is created frequently. - Some fields can be converted to local variables: sysOutStream, sysOut, sysErr, csConf - Any additional cleanup
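The proposed Builder could follow the usual fluent pattern; the class and method names below are illustrative assumptions, not the real SchedConfUpdateInfo API:

```java
// Hypothetical fluent Builder for the frequently-constructed update-info
// object, so each test case builds its input in one readable chain.
import java.util.ArrayList;
import java.util.List;

class UpdateInfoBuilder {
    private final List<String> addQueues = new ArrayList<>();
    private final List<String> removeQueues = new ArrayList<>();

    UpdateInfoBuilder addQueue(String path)    { addQueues.add(path); return this; }
    UpdateInfoBuilder removeQueue(String path) { removeQueues.add(path); return this; }

    List<String> added()   { return addQueues; }
    List<String> removed() { return removeQueues; }
}
```

A test then reads as a single expression, e.g. `new UpdateInfoBuilder().addQueue("root.a").removeQueue("root.b")`, instead of several lines of setter calls per case.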
[jira] [Updated] (YARN-9993) Remove incorrectly committed files from YARN-9011
[ https://issues.apache.org/jira/browse/YARN-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9993: - Hadoop Flags: Reviewed > Remove incorrectly committed files from YARN-9011 > - > > Key: YARN-9993 > URL: https://issues.apache.org/jira/browse/YARN-9993 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.2.2 >Reporter: Wilfred Spiegelenburg >Assignee: Wilfred Spiegelenburg >Priority: Major > Fix For: 3.2.2 > > Attachments: YARN-9993-branch-3.2-001.patch > > > With the check-in of YARN-9011, a number of files were added that should not > have been in the commit: > [https://github.com/apache/hadoop/tree/branch-3.2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications] > This causes the ASF license check to fail on the branch-3.2 build
[jira] [Commented] (YARN-9993) Remove incorrectly committed files from YARN-9011
[ https://issues.apache.org/jira/browse/YARN-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984350#comment-16984350 ] Szilard Nemeth commented on YARN-9993: -- Hi [~wilfreds]! Thanks for pointing out this mistake. Pushed the patch onto branch-3.2. Thanks for your contribution! > Remove incorrectly committed files from YARN-9011 > - > > Key: YARN-9993 > URL: https://issues.apache.org/jira/browse/YARN-9993 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.2.2 >Reporter: Wilfred Spiegelenburg >Assignee: Wilfred Spiegelenburg >Priority: Major > Attachments: YARN-9993-branch-3.2-001.patch > > > With the check-in of YARN-9011, a number of files were added that should not > have been in the commit: > [https://github.com/apache/hadoop/tree/branch-3.2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications] > This causes the ASF license check to fail on the branch-3.2 build
[jira] [Commented] (YARN-9892) Capacity scheduler: support DRF ordering policy on queue level
[ https://issues.apache.org/jira/browse/YARN-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984262#comment-16984262 ] Peter Bacsko commented on YARN-9892: [~maniraj...@gmail.com] yes, feel free to assign this to yourself. I haven't worked on this at all. > Capacity scheduler: support DRF ordering policy on queue level > -- > > Key: YARN-9892 > URL: https://issues.apache.org/jira/browse/YARN-9892 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > > Capacity scheduler does not support DRF (Dominant Resource Fairness) ordering > policy on queue level. Only "fifo" and "fair" are accepted for > {{yarn.scheduler.capacity.<queue-path>.ordering-policy}}. > DRF can only be used globally if > {{yarn.scheduler.capacity.resource-calculator}} is set to > DominantResourceCalculator. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
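For reference, the current state described in the issue looks roughly like this in capacity-scheduler.xml; the queue path "root.default" is just an example, and per the issue a "drf" value for the queue-level policy is what this sub-task would add:

```xml
<!-- Today DRF can only be enabled globally, via the resource calculator: -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>

<!-- At queue level only "fifo" and "fair" are currently accepted: -->
<property>
  <name>yarn.scheduler.capacity.root.default.ordering-policy</name>
  <value>fair</value>
</property>
```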