[jira] [Commented] (YARN-9768) RM Renew Delegation token thread should timeout and retry
[ https://issues.apache.org/jira/browse/YARN-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024883#comment-17024883 ] Hadoop QA commented on YARN-9768:

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 40s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 38s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 11s | trunk passed |
| +1 | compile | 7m 21s | trunk passed |
| +1 | checkstyle | 1m 23s | trunk passed |
| +1 | mvnsite | 2m 26s | trunk passed |
| +1 | shadedclient | 16m 55s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 29s | trunk passed |
| +1 | javadoc | 2m 0s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 51s | the patch passed |
| +1 | compile | 6m 47s | the patch passed |
| +1 | javac | 6m 47s | the patch passed |
| +1 | checkstyle | 1m 20s | the patch passed |
| +1 | mvnsite | 2m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 13m 28s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 49s | the patch passed |
| +1 | javadoc | 1m 55s | the patch passed |
|| Other Tests ||
| +1 | unit | 0m 54s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 47s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 86m 10s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 177m 41s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-9768 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991970/YARN-9768.010.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux f06971e07c10 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |
[jira] [Assigned] (YARN-5277) when localizers fail due to resource timestamps being out, provide more diagnostics
[ https://issues.apache.org/jira/browse/YARN-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Ahuja reassigned YARN-5277:

Assignee: Siddharth Ahuja

> when localizers fail due to resource timestamps being out, provide more diagnostics
> -----------------------------------------------------------------------------------
>
>          Key: YARN-5277
>          URL: https://issues.apache.org/jira/browse/YARN-5277
>      Project: Hadoop YARN
>   Issue Type: Improvement
>   Components: nodemanager
> Affects Versions: 2.8.0
>     Reporter: Steve Loughran
>     Assignee: Siddharth Ahuja
>     Priority: Major
>
> When an NM fails a resource download (D/L) because the timestamps are wrong, there's not much info, just two long values.
> It would be good to also include the local time values, *and the current wall time*. These are the things people need to know when trying to work out what went wrong.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
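The kind of diagnostic the issue asks for is easy to sketch. The snippet below is a hypothetical illustration, not the actual NM localizer code: it keeps the two raw long values but adds human-readable times and the current wall time, which is the extra context the reporter wants.

```java
import java.time.Instant;

public class TimestampDiagnostics {

    // Builds a localizer failure message that includes ISO-8601 renderings of
    // both timestamps plus the current wall time, not just two bare longs.
    static String describeMismatch(String resource, long expectedTs, long actualTs) {
        return String.format(
            "Resource %s timestamp mismatch: expected %d (%s), found %d (%s), "
                + "current wall time %s",
            resource,
            expectedTs, Instant.ofEpochMilli(expectedTs),
            actualTs, Instant.ofEpochMilli(actualTs),
            Instant.now());
    }

    public static void main(String[] args) {
        System.out.println(describeMismatch("hdfs:///app/job.jar", 1000L, 2000L));
    }
}
```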
[jira] [Updated] (YARN-9768) RM Renew Delegation token thread should timeout and retry
[ https://issues.apache.org/jira/browse/YARN-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manikandan R updated YARN-9768:

Attachment: YARN-9768.010.patch

> RM Renew Delegation token thread should timeout and retry
> ---------------------------------------------------------
>
>          Key: YARN-9768
>          URL: https://issues.apache.org/jira/browse/YARN-9768
>      Project: Hadoop YARN
>   Issue Type: Improvement
>     Reporter: CR Hota
>     Assignee: Manikandan R
>     Priority: Major
>      Fix For: 3.3.0
>
>  Attachments: YARN-9768.001.patch, YARN-9768.002.patch, YARN-9768.003.patch, YARN-9768.004.patch, YARN-9768.005.patch, YARN-9768.006.patch, YARN-9768.007.patch, YARN-9768.008.patch, YARN-9768.009.patch, YARN-9768.010.patch
>
>
> The delegation token renewer thread in the RM (DelegationTokenRenewer.java) renews received HDFS tokens to check their validity and expiration time.
> This call is made to an underlying HDFS NN or Router node (which exposes the same APIs as the HDFS NN). If one of those nodes is bad and the renew call gets stuck, the thread remains stuck indefinitely. The thread should ideally time out the renewToken call and retry from the client's perspective.
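A common way to get the behavior the issue describes is to run the blocking renew call on a worker thread and bound it with Future.get. The sketch below is a minimal illustration under that assumption, not the actual YARN-9768 patch; all names are made up. Note that cancel(true) only interrupts the worker; a renew call stuck in uninterruptible I/O can still occupy the single-threaded pool.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedRenewer {

    // Runs renewCall with a per-attempt timeout; retries up to maxRetries
    // times before propagating the TimeoutException to the caller.
    static long renewWithTimeoutAndRetry(Callable<Long> renewCall,
                                         long timeoutSec, int maxRetries)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            for (int attempt = 1; ; attempt++) {
                Future<Long> f = pool.submit(renewCall);
                try {
                    return f.get(timeoutSec, TimeUnit.SECONDS); // new expiry time
                } catch (TimeoutException e) {
                    f.cancel(true); // interrupt the stuck renew attempt
                    if (attempt >= maxRetries) {
                        throw e;
                    }
                }
            }
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A well-behaved renewal returns promptly with the new expiration time.
        long expiry = renewWithTimeoutAndRetry(
            () -> System.currentTimeMillis() + 86_400_000L, 5, 3);
        System.out.println("renewed until " + expiry);
    }
}
```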
[jira] [Commented] (YARN-9768) RM Renew Delegation token thread should timeout and retry
[ https://issues.apache.org/jira/browse/YARN-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024831#comment-17024831 ] Manikandan R commented on YARN-9768:

Attaching .010 patch.

> RM Renew Delegation token thread should timeout and retry
> ---------------------------------------------------------
>
>          Key: YARN-9768
>          URL: https://issues.apache.org/jira/browse/YARN-9768
>      Project: Hadoop YARN
>   Issue Type: Improvement
>     Reporter: CR Hota
>     Assignee: Manikandan R
>     Priority: Major
>      Fix For: 3.3.0
>
>  Attachments: YARN-9768.001.patch, YARN-9768.002.patch, YARN-9768.003.patch, YARN-9768.004.patch, YARN-9768.005.patch, YARN-9768.006.patch, YARN-9768.007.patch, YARN-9768.008.patch, YARN-9768.009.patch
>
>
> The delegation token renewer thread in the RM (DelegationTokenRenewer.java) renews received HDFS tokens to check their validity and expiration time.
> This call is made to an underlying HDFS NN or Router node (which exposes the same APIs as the HDFS NN). If one of those nodes is bad and the renew call gets stuck, the thread remains stuck indefinitely. The thread should ideally time out the renewToken call and retry from the client's perspective.
[jira] [Commented] (YARN-9743) [JDK11] TestTimelineWebServices.testContextFactory fails
[ https://issues.apache.org/jira/browse/YARN-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024810#comment-17024810 ] Akira Ajisaka commented on YARN-9743:

Hi [~kmarton], how is this issue going? This blocks HADOOP-15338 and I'd like to fix this before releasing Hadoop 3.3.0.

> [JDK11] TestTimelineWebServices.testContextFactory fails
> --------------------------------------------------------
>
>          Key: YARN-9743
>          URL: https://issues.apache.org/jira/browse/YARN-9743
>      Project: Hadoop YARN
>   Issue Type: Bug
>   Components: timelineservice
> Affects Versions: 3.2.0
>     Reporter: Adam Antal
>     Assignee: Kinga Marton
>     Priority: Major
>  Attachments: YARN-9743.001.patch, YARN-9743.002.patch
>
>
> Tested on OpenJDK 11.0.2 on a Mac.
> Stack trace:
> {noformat}
> [ERROR] Tests run: 29, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 36.016 s <<< FAILURE! - in org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
> [ERROR] testContextFactory(org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices)  Time elapsed: 1.031 s  <<< ERROR!
> java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory
>         at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
>         at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
>         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
>         at java.base/java.lang.Class.forName0(Native Method)
>         at java.base/java.lang.Class.forName(Class.java:315)
>         at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.newContext(ContextFactory.java:85)
>         at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.createContext(ContextFactory.java:112)
>         at org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices.testContextFactory(TestTimelineWebServices.java:1039)
>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>         at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>         at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>         at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>         at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>         at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>         at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>         at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>         at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}
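The trace shows why this is JDK 11-specific: the Class.forName call in ContextFactory.newContext looks up com.sun.xml.internal.bind.v2.ContextFactory, a class that shipped inside the JDK through Java 8 but was removed along with the bundled java.xml.bind module, so the reflective lookup now throws ClassNotFoundException. A typical mitigation is to try a list of candidate factory class names and use the first one present on the classpath. The helper below is a generic sketch of that idea; which candidate names to pass (the internal JDK class, the external com.sun.xml.bind equivalent from a jaxb-impl dependency, etc.) depends on the dependencies in use and is not taken from the actual patch.

```java
public class JaxbFactoryLookup {

    // Returns the first loadable class among the candidates, or rethrows the
    // last ClassNotFoundException if none of them is on the classpath.
    static Class<?> firstAvailable(String... candidates) throws ClassNotFoundException {
        ClassNotFoundException last = null;
        for (String name : candidates) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException e) {
                last = e; // remember and try the next candidate
            }
        }
        throw last != null ? last : new ClassNotFoundException("no candidates given");
    }
}
```

A caller would then try the JDK-internal name first and fall back to the external artifact, e.g. `firstAvailable("com.sun.xml.internal.bind.v2.ContextFactory", "com.sun.xml.bind.v2.ContextFactory")`, instead of hard-coding the internal class.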
[jira] [Commented] (YARN-8990) Fix fair scheduler race condition in app submit and queue cleanup
[ https://issues.apache.org/jira/browse/YARN-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024799#comment-17024799 ] Steven Rand commented on YARN-8990:

How would people feel about cherry-picking this and YARN-8992 to {{branch-3.2}}? It seems like we should do that before {{branch-3.2.2}} gets cut for an eventual 3.2.2 release.

> Fix fair scheduler race condition in app submit and queue cleanup
> -----------------------------------------------------------------
>
>          Key: YARN-8990
>          URL: https://issues.apache.org/jira/browse/YARN-8990
>      Project: Hadoop YARN
>   Issue Type: Bug
>   Components: fairscheduler
> Affects Versions: 3.2.0
>     Reporter: Wilfred Spiegelenburg
>     Assignee: Wilfred Spiegelenburg
>     Priority: Blocker
>      Fix For: 3.2.0, 3.3.0
>
>  Attachments: YARN-8990.001.patch, YARN-8990.002.patch
>
>
> With the introduction of dynamic queue deletion in YARN-8191, a race condition was introduced that can cause a queue to be removed while an application submit is in progress.
> The issue occurs in {{FairScheduler.addApplication()}} when an application is submitted to a dynamic queue which is empty or does not exist yet. If, during the processing of the application submit, the {{AllocationFileLoaderService}} kicks off an update, the queue cleanup will be run first. The application submit first creates the queue and gets a reference back to it. Other checks are performed, and as the last action before getting ready to generate an AppAttempt, the queue is updated to record the submitted application ID.
> The time between the queue creation and the queue update that records the submit is long enough for the queue to be removed. The application, however, is lost and will never get any resources assigned.
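The race described above is a classic check-then-act problem: queue creation and the later update that records the submitted application happen in separate critical sections, leaving a window in which cleanup can delete the queue. The class below is a deliberately minimal model of that idea, not FairScheduler code (all names are made up): making create-and-register one atomic step under a single lock means cleanup can never observe the queue empty mid-submission.

```java
import java.util.HashMap;
import java.util.Map;

public class QueueRegistry {
    // queue name -> number of registered applications
    private final Map<String, Integer> appsPerQueue = new HashMap<>();
    private final Object lock = new Object();

    // Atomic create-and-register: the queue is never visible with zero apps
    // between creation and the submit being recorded.
    void addApplication(String queue) {
        synchronized (lock) {
            appsPerQueue.merge(queue, 1, Integer::sum);
        }
    }

    void finishApplication(String queue) {
        synchronized (lock) {
            appsPerQueue.computeIfPresent(queue, (q, n) -> n - 1);
        }
    }

    // Cleanup only removes queues that are genuinely empty at the time it
    // holds the same lock as submission.
    void removeIfEmpty(String queue) {
        synchronized (lock) {
            Integer apps = appsPerQueue.get(queue);
            if (apps != null && apps == 0) {
                appsPerQueue.remove(queue);
            }
        }
    }

    boolean exists(String queue) {
        synchronized (lock) {
            return appsPerQueue.containsKey(queue);
        }
    }
}
```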
[jira] [Commented] (YARN-10015) Correct the sample command in SLS README file
[ https://issues.apache.org/jira/browse/YARN-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024762#comment-17024762 ] Yufei Gu commented on YARN-10015:

+1, will commit later.

> Correct the sample command in SLS README file
> ---------------------------------------------
>
>          Key: YARN-10015
>          URL: https://issues.apache.org/jira/browse/YARN-10015
>      Project: Hadoop YARN
>   Issue Type: Sub-task
>     Reporter: Aihua Xu
>     Assignee: Aihua Xu
>     Priority: Trivial
>  Attachments: YARN-10015.patch
>
>
> The sample command in the SLS README, {{bin/slsrun.sh —-input-rumen=sample-data/2jobs2min-rumen-jh.json —-output-dir=sample-output}}, contains a dash from a different encoding. The command fails with the following error:
> ERROR: Invalid option —-input-rumen=sample-data/2jobs2min-rumen-jh.json
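The broken option prefix looks like "--" on screen but is actually a multi-byte Unicode dash followed by a single ASCII hyphen, which the option parser rejects. A sketch of the corrected invocation (the script and sample paths are the ones quoted from the README; the command itself is shown commented out rather than run) plus a byte-level comparison of the two prefixes:

```shell
# Corrected invocation: two ASCII hyphens (0x2d 0x2d) before each option name.
#   bin/slsrun.sh --input-rumen=sample-data/2jobs2min-rumen-jh.json \
#                 --output-dir=sample-output

# The two prefixes look alike but differ at the byte level:
printf '%s\n' '—-' | od -An -tx1   # Unicode dash (multi-byte), then 2d
printf '%s\n' '--' | od -An -tx1   # two ASCII hyphens: 2d 2d
```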
[jira] [Commented] (YARN-10015) Correct the sample command in SLS README file
[ https://issues.apache.org/jira/browse/YARN-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024679#comment-17024679 ] Aihua Xu commented on YARN-10015:

[~yufeigu] Can you help review and commit the patch? It's a simple doc change. Thanks.

> Correct the sample command in SLS README file
> ---------------------------------------------
>
>          Key: YARN-10015
>          URL: https://issues.apache.org/jira/browse/YARN-10015
>      Project: Hadoop YARN
>   Issue Type: Sub-task
>     Reporter: Aihua Xu
>     Assignee: Aihua Xu
>     Priority: Trivial
>  Attachments: YARN-10015.patch
>
>
> The sample command in the SLS README, {{bin/slsrun.sh —-input-rumen=sample-data/2jobs2min-rumen-jh.json —-output-dir=sample-output}}, contains a dash from a different encoding. The command fails with the following error:
> ERROR: Invalid option —-input-rumen=sample-data/2jobs2min-rumen-jh.json
[jira] [Commented] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024620#comment-17024620 ] Hadoop QA commented on YARN-10107:

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 36s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 20m 7s | trunk passed |
| +1 | compile | 1m 1s | trunk passed |
| +1 | checkstyle | 0m 24s | trunk passed |
| +1 | mvnsite | 0m 36s | trunk passed |
| +1 | shadedclient | 13m 43s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 58s | trunk passed |
| +1 | javadoc | 0m 27s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 33s | the patch passed |
| +1 | compile | 0m 54s | the patch passed |
| +1 | javac | 0m 54s | the patch passed |
| +1 | checkstyle | 0m 19s | the patch passed |
| +1 | mvnsite | 0m 30s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 48s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 2s | the patch passed |
| +1 | javadoc | 0m 23s | the patch passed |
|| Other Tests ||
| +1 | unit | 21m 27s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 77m 30s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10107 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991938/YARN-10107.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f6d6a86ecb41 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7f40e66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25446/testReport/ |
| Max. process+thread count | 343 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25446/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> Invoking NMWebServices#getNMResourceInfo tries to execute
[jira] [Commented] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024618#comment-17024618 ] Hadoop QA commented on YARN-10107:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 35s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 20m 58s | trunk passed |
| +1 | compile | 1m 3s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 0m 36s | trunk passed |
| +1 | shadedclient | 13m 49s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 57s | trunk passed |
| +1 | javadoc | 0m 26s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 33s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| +1 | checkstyle | 0m 20s | the patch passed |
| +1 | mvnsite | 0m 32s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 47s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 3s | the patch passed |
| +1 | javadoc | 0m 23s | the patch passed |
|| Other Tests ||
| -1 | unit | 21m 14s | hadoop-yarn-server-nodemanager in the patch failed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 78m 13s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10107 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991938/YARN-10107.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux cda7e467fece 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7f40e66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/25445/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25445/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U:
[jira] [Commented] (YARN-10101) Support listing of aggregated logs for containers belonging to an application attempt
[ https://issues.apache.org/jira/browse/YARN-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024604#comment-17024604 ] Hadoop QA commented on YARN-10101:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 36s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 20s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 13s | trunk passed |
| +1 | compile | 15m 40s | trunk passed |
| +1 | checkstyle | 2m 33s | trunk passed |
| +1 | mvnsite | 3m 9s | trunk passed |
| +1 | shadedclient | 19m 27s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 44s | trunk passed |
| +1 | javadoc | 2m 46s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 16s | the patch passed |
| +1 | compile | 15m 3s | the patch passed |
| +1 | javac | 15m 3s | the patch passed |
| -0 | checkstyle | 2m 36s | root: The patch generated 21 new + 58 unchanged - 0 fixed = 79 total (was 58) |
| +1 | mvnsite | 3m 10s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 12m 59s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 29s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 2m 54s | the patch passed |
|| Other Tests ||
| +1 | unit | 3m 52s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 2m 27s | hadoop-yarn-server-common in the patch passed. |
| -1 | unit | 3m 52s | hadoop-yarn-server-applicationhistoryservice in the patch failed. |
| +1 | unit | 26m 8s | hadoop-yarn-client in the patch passed. |
| +1 | unit | 3m 42s | hadoop-mapreduce-client-hs in the patch passed. |
| +1 | asflicense | 0m 39s | The patch does not generate ASF License warnings. |
| | | 152m 42s | |

||
[jira] [Commented] (YARN-9768) RM Renew Delegation token thread should timeout and retry
[ https://issues.apache.org/jira/browse/YARN-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024591#comment-17024591 ] Íñigo Goiri commented on YARN-9768: --- I tried resetting the state but no luck. Can you upload a new one? > RM Renew Delegation token thread should timeout and retry > - > > Key: YARN-9768 > URL: https://issues.apache.org/jira/browse/YARN-9768 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: CR Hota >Assignee: Manikandan R >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-9768.001.patch, YARN-9768.002.patch, > YARN-9768.003.patch, YARN-9768.004.patch, YARN-9768.005.patch, > YARN-9768.006.patch, YARN-9768.007.patch, YARN-9768.008.patch, > YARN-9768.009.patch > > > The delegation token renewer thread in the RM (DelegationTokenRenewer.java) renews > received HDFS tokens to check their validity and expiration time. > This call is made to an underlying HDFS NN or Router node (which exposes the > same APIs as the HDFS NN). If one of the nodes is bad and the renew call gets stuck, the > thread remains stuck indefinitely. The thread should ideally time out the > renewToken call and retry from the client's side. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
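A minimal sketch of the timeout-and-retry idea the issue proposes (illustrative only: the TimedRenewer class, its method names, and the Callable-based renew action are assumptions for this example, not the actual DelegationTokenRenewer patch):

```java
import java.util.concurrent.*;

// Hypothetical sketch: run a potentially-hanging renew call under a timeout
// and retry a bounded number of times, instead of blocking the renewer
// thread indefinitely.
public class TimedRenewer {
    // Daemon worker so a stuck renew attempt cannot keep the JVM alive.
    private final ExecutorService pool = Executors.newSingleThreadExecutor(task -> {
        Thread t = new Thread(task);
        t.setDaemon(true);
        return t;
    });

    // renewAction stands in for something like token.renew(conf); it may block.
    public long renewWithTimeout(Callable<Long> renewAction,
                                 long timeoutMs, int maxRetries) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            Future<Long> f = pool.submit(renewAction);
            try {
                return f.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                f.cancel(true);   // interrupt the stuck attempt before retrying
                last = e;
            }
        }
        throw last;               // all attempts timed out
    }
}
```

Submitting the renew call to an executor and bounding the wait with Future.get keeps the calling thread from blocking forever; cancel(true) interrupts the stuck attempt so the next retry gets a clean worker.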
[jira] [Updated] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10107: -- Attachment: YARN-10107.001.patch > Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery > binary even if auto discovery is turned off > --- > > Key: YARN-10107 > URL: https://issues.apache.org/jira/browse/YARN-10107 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-10107.001.patch > >
[jira] [Commented] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration
[ https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024518#comment-17024518 ] Hadoop QA commented on YARN-10022: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 4 new + 112 unchanged - 9 fixed = 116 total (was 121) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 83m 26s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10022 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991914/YARN-10022.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 68ee21805d3a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7f40e66 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/25443/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25443/testReport/ | | Max. process+thread count | 888 (vs. ulimit of 5500) | | modules | C:
[jira] [Commented] (YARN-10101) Support listing of aggregated logs for containers belonging to an application attempt
[ https://issues.apache.org/jira/browse/YARN-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024501#comment-17024501 ] Adam Antal commented on YARN-10101: --- Uploaded patch v1. Implementation notes: - Generalized some functions in {{LogServlet}} to support not only containers but also application attempts as inputs. - Created {{WrappedLogMetaRequest}}, which wraps the request and translates it to {{ContainerLogRequest}} for {{LogAggregationFileController}} implementations. - (The original method from {{LogWebServiceUtils}} was moved into {{LogServlet}} and refactored.) > Support listing of aggregated logs for containers belonging to an application > attempt > - > > Key: YARN-10101 > URL: https://issues.apache.org/jira/browse/YARN-10101 > Project: Hadoop YARN > Issue Type: Sub-task > Components: log-aggregation, yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: YARN-10101.001.patch > > > To display logs without access to the timeline server, we need an interface > where we can query the list of containers with aggregated logs belonging to > an application attempt. > We should add support for this.
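The wrapper idea from the implementation notes above could look roughly like this (purely illustrative: the LogQuery class and toContainerRequests method are hypothetical stand-ins, not the actual WrappedLogMetaRequest from the patch):

```java
import java.util.*;

// Hypothetical sketch: one request object that accepts either a single
// container or a whole application attempt, and is later translated into
// per-container log requests for the file-controller layer.
public class LogQuery {
    private final String attemptId;    // set when listing a whole attempt
    private final String containerId;  // set when listing one container

    private LogQuery(String attemptId, String containerId) {
        this.attemptId = attemptId;
        this.containerId = containerId;
    }

    public static LogQuery forAttempt(String attemptId) {
        return new LogQuery(attemptId, null);
    }

    public static LogQuery forContainer(String containerId) {
        return new LogQuery(null, containerId);
    }

    // Translate to one request per container. In the real system, the
    // container list for an attempt would come from the aggregated-log index.
    public List<String> toContainerRequests(Map<String, List<String>> containersByAttempt) {
        if (containerId != null) {
            return Collections.singletonList(containerId);
        }
        return new ArrayList<>(
            containersByAttempt.getOrDefault(attemptId, Collections.emptyList()));
    }
}
```

The point of the translation step is that downstream consumers (in the patch, the {{LogAggregationFileController}} implementations) keep seeing only per-container requests, while the servlet layer gains the attempt-level entry point.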
[jira] [Updated] (YARN-10101) Support listing of aggregated logs for containers belonging to an application attempt
[ https://issues.apache.org/jira/browse/YARN-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated YARN-10101: -- Attachment: YARN-10101.001.patch > Support listing of aggregated logs for containers belonging to an application > attempt > - > > Key: YARN-10101 > URL: https://issues.apache.org/jira/browse/YARN-10101 > Project: Hadoop YARN > Issue Type: Sub-task > Components: log-aggregation, yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: YARN-10101.001.patch > > > To display logs without access to the timeline server, we need an interface > where we can query the list of containers with aggregated logs belonging to > an application attempt. > We should add support for this.
[jira] [Commented] (YARN-10104) FS-CS converter: dry run should work without output defined
[ https://issues.apache.org/jira/browse/YARN-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024493#comment-17024493 ] Hadoop QA commented on YARN-10104: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 85m 32s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10104 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991911/YARN-10104-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 715771f1b9f4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7f40e66 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25442/testReport/ | | Max. process+thread count | 831 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25442/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > FS-CS converter: dry run should work without
[jira] [Commented] (YARN-10105) FS-CS converter: separator between mapping rules should be comma
[ https://issues.apache.org/jira/browse/YARN-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024481#comment-17024481 ] Hadoop QA commented on YARN-10105: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 16s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}145m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10105 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991907/YARN-10105-001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux fc372146d894 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7f40e66 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25441/testReport/ | | Max. process+thread count | 838 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25441/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > FS-CS converter: separator between mapping rules
[jira] [Comment Edited] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024417#comment-17024417 ] Szilard Nemeth edited comment on YARN-10107 at 1/27/20 3:23 PM: During internal end-to-end testing, I found the following issue: Configuration: - GPU is enabled - yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables is set to "/usr/bin/ls" - Any existing valid binary file - yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices is set to "0:0,1:1,2:2", so auto-discovery is turned off. If REST endpoint [http://quasar-tsjqpq-3.vpc.cloudera.com:8042/ws/v1/node/resources/yarn.io%2Fgpu] is called, the following exception is thrown in NM: {code:java} 2020-01-23 07:55:24,803 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin: Failed to find GPU discovery executable, please double check yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. org.apache.hadoop.yarn.exceptions.YarnException: Failed to find GPU discovery executable, please double check yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. at org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper.getGpuDeviceInformation(NvidiaBinaryHelper.java:54) at org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer.getGpuDeviceInformation(GpuDiscoverer.java:125) at org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin.getNMResourceInfo(GpuResourcePlugin.java:104) at org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getNMResourceInfo(NMWebServices.java:515) {code} *Let's break this down:* 1. org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin#getNMResourceInfo just calls to the {code:java} gpuDeviceInformation = gpuDiscoverer.getGpuDeviceInformation(); {code} 2. 
In org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#getGpuDeviceInformation, the following code calls NvidiaBinaryHelper.getGpuDeviceInformation: {code:java} try { lastDiscoveredGpuInformation = nvidiaBinaryHelper.getGpuDeviceInformation(pathOfGpuBinary); } catch (IOException e) { {code} 3. org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper#getGpuDeviceInformation finally throws the exception. This happens only when the parameter "pathOfGpuBinary" is null. Since this method is called only from GpuDiscoverer#getGpuDeviceInformation, which passes its field "pathOfGpuBinary" as the sole parameter, we know that if this field is null, we get the exception. 4. The only call chain that can set the "pathOfGpuBinary" field is: {code:java} GpuDiscoverer.lookUpAutoDiscoveryBinary(Configuration) (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) GpuDiscoverer.initialize(Configuration, NvidiaBinaryHelper) (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) {code} 5. GpuDiscoverer#initialize contains this code: {code:java} if (isAutoDiscoveryEnabled()) { numOfErrorExecutionSinceLastSucceed = 0; lookUpAutoDiscoveryBinary(config); {code} so org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#pathOfGpuBinary is set ONLY IF auto discovery is enabled. Since our tests do not have auto discovery enabled, we hit this exception. The exception message is therefore very misleading: {code:java} Failed to find GPU discovery executable, please double check yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. {code} Related jira: https://issues.apache.org/jira/browse/YARN-9337 The exception message is misleading, and of course it makes no sense at all to try to execute the discovery binary in this case. 
was (Author: snemeth): During internal end-to-end testing, I found the following issue: Configuration: - GPU is enabled - yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables is set to "/usr/bin/ls" - Any existing valid binary file - yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices is set to "0:0,1:1,2:2", so auto-discovery is turned off. If REST endpoint [http://quasar-tsjqpq-3.vpc.cloudera.com:8042/ws/v1/node/resources/yarn.io%2Fgpu] is called, the following exception is thrown in NM: {code:java} 2020-01-23 07:55:24,803 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin: Failed to find GPU discovery executable, please double check yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting.
[jira] [Commented] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024417#comment-17024417 ] Szilard Nemeth commented on YARN-10107: --- During internal end-to-end testing, I found the following issue: Configuration: - GPU is enabled - yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables is set to "/usr/bin/ls" - Any existing valid binary file - yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices is set to "0:0,1:1,2:2", so auto-discovery is turned off. If REST endpoint http://quasar-tsjqpq-3.vpc.cloudera.com:8042/ws/v1/node/resources/yarn.io%2Fgpu is called, the following exception is thrown in NM: {code:java} 2020-01-23 07:55:24,803 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin: Failed to find GPU discovery executable, please double check yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. org.apache.hadoop.yarn.exceptions.YarnException: Failed to find GPU discovery executable, please double check yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. at org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper.getGpuDeviceInformation(NvidiaBinaryHelper.java:54) at org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer.getGpuDeviceInformation(GpuDiscoverer.java:125) at org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin.getNMResourceInfo(GpuResourcePlugin.java:104) at org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getNMResourceInfo(NMWebServices.java:515) {code} Let's break this down: 1. org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin#getNMResourceInfo just calls to the {code:java} gpuDeviceInformation = gpuDiscoverer.getGpuDeviceInformation(); {code} 2. 
In org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#getGpuDeviceInformation, the following code calls NvidiaBinaryHelper.getGpuDeviceInformation: {code:java} try { lastDiscoveredGpuInformation = nvidiaBinaryHelper.getGpuDeviceInformation(pathOfGpuBinary); } catch (IOException e) { {code} 3. org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper#getGpuDeviceInformation finally throws the exception. This happens only when the parameter "pathOfGpuBinary" is null. Since this method is called only from GpuDiscoverer#getGpuDeviceInformation, which passes its field "pathOfGpuBinary" as the sole parameter, we know that if this field is null, we get the exception. 4. The only call chain that can set the "pathOfGpuBinary" field is: {code:java} GpuDiscoverer.lookUpAutoDiscoveryBinary(Configuration) (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) GpuDiscoverer.initialize(Configuration, NvidiaBinaryHelper) (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) {code} 5. GpuDiscoverer#initialize contains this code: {code:java} if (isAutoDiscoveryEnabled()) { numOfErrorExecutionSinceLastSucceed = 0; lookUpAutoDiscoveryBinary(config); {code} so org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#pathOfGpuBinary is set ONLY IF auto discovery is enabled. Since our tests do not have auto discovery enabled, we hit this exception. The exception message is therefore very misleading: {code:java} Failed to find GPU discovery executable, please double check yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. 
{code} Related jira: https://issues.apache.org/jira/browse/YARN-9337 > Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery > binary even if auto discovery is turned off > --- > > Key: YARN-10107 > URL: https://issues.apache.org/jira/browse/YARN-10107 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
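The fix direction implied by the analysis above can be sketched as follows. This is a hedged, standalone illustration only: the class, fields, and method bodies mirror the names discussed in the comment (GpuDiscoverer's pathOfGpuBinary, getNMResourceInfo), but none of this is the actual Hadoop code or the committed patch. The idea: when auto-discovery is off, report the statically configured devices instead of ever touching the discovery binary.

```java
import java.util.List;

// Illustrative sketch only: mirrors the fields named in the analysis above,
// not the real org.apache.hadoop.yarn.server.nodemanager classes.
public class GpuInfoSketch {
    // Stays null unless lookUpAutoDiscoveryBinary() ran, i.e. auto-discovery enabled.
    static String pathOfGpuBinary = null;
    // allowed-gpu-devices = "0:0,1:1,2:2" turns auto-discovery off.
    static boolean autoDiscoveryEnabled = false;
    static List<String> manuallyConfiguredDevices = List.of("0:0", "1:1", "2:2");

    static Object getNMResourceInfo() {
        if (!autoDiscoveryEnabled) {
            // Auto-discovery off: never invoke the binary, just report
            // the statically configured devices.
            return manuallyConfiguredDevices;
        }
        if (pathOfGpuBinary == null) {
            throw new IllegalStateException(
                "GPU discovery binary not found; check path-to-discovery-executables");
        }
        return runDiscoveryBinary(pathOfGpuBinary);
    }

    static Object runDiscoveryBinary(String path) {
        return "parsed nvidia-smi output"; // placeholder for the real parse
    }

    public static void main(String[] args) {
        // With auto-discovery off, the REST path no longer throws.
        System.out.println(getNMResourceInfo());
    }
}
```

With this shape, the misleading "Failed to find GPU discovery executable" message can only appear when auto-discovery was actually requested.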
[jira] [Created] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
Szilard Nemeth created YARN-10107: - Summary: Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off Key: YARN-10107 URL: https://issues.apache.org/jira/browse/YARN-10107 Project: Hadoop YARN Issue Type: Bug Reporter: Szilard Nemeth
[jira] [Assigned] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth reassigned YARN-10107: - Assignee: Szilard Nemeth
[jira] [Commented] (YARN-10085) FS-CS converter: remove mixed ordering policy check
[ https://issues.apache.org/jira/browse/YARN-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024411#comment-17024411 ] Adam Antal commented on YARN-10085: --- Provided the tests are passing, and my comments are addressed, +1 (non-binding). > FS-CS converter: remove mixed ordering policy check > --- > > Key: YARN-10085 > URL: https://issues.apache.org/jira/browse/YARN-10085 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Attachments: YARN-10085-001.patch, YARN-10085-002.patch, > YARN-10085-003.patch, YARN-10085-004.patch, YARN-10085-004.patch, > YARN-10085-005.patch, YARN-10085-006.patch > > > In the converter, this part is very strict and probably unnecessary: > {noformat} > // Validate ordering policy > if (queueConverter.isDrfPolicyUsedOnQueueLevel()) { > if (queueConverter.isFifoOrFairSharePolicyUsed()) { > throw new ConversionException( > "DRF ordering policy cannot be used together with fifo/fair"); > } else { > capacitySchedulerConfig.set( > CapacitySchedulerConfiguration.RESOURCE_CALCULATOR_CLASS, > DominantResourceCalculator.class.getCanonicalName()); > } > } > {noformat} > It's also misleading, because Fair policy can be used under DRF, so the error > message is incorrect. > Let's remove these checks and rewrite the converter in a way that it > generates a valid config even if fair/drf is somehow mixed.
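The relaxed conversion the ticket asks for can be sketched like this. This is a hedged illustration under assumed names (PolicyConversionSketch and its fields are placeholders, not the committed patch): instead of throwing ConversionException on mixed policies, always switch to DominantResourceCalculator when DRF is used on any queue, and only warn about the one unsupported combination.

```java
// Hedged sketch of removing the mixed-policy check. Field and class names
// here are placeholders; only DominantResourceCalculator's class name and
// the original if/else shape come from the ticket description.
public class PolicyConversionSketch {
    static boolean drfUsedOnQueueLevel = true;
    static boolean fifoOrFairSharePolicyUsed = true;
    static String resourceCalculatorClass = null;
    static String warning = null;

    static void convertOrderingPolicy() {
        if (drfUsedOnQueueLevel) {
            // Always valid: DRF anywhere implies the DRC calculator.
            resourceCalculatorClass =
                "org.apache.hadoop.yarn.util.resource.DominantResourceCalculator";
            if (fifoOrFairSharePolicyUsed) {
                // Previously a hard ConversionException; now just a warning,
                // so the converter still emits a valid config.
                warning = "DominantResourceCalculator is enabled, but some queues"
                    + " still use fifo/fair ordering policies";
            }
        }
    }

    public static void main(String[] args) {
        convertOrderingPolicy();
        System.out.println(resourceCalculatorClass);
        System.out.println(warning);
    }
}
```

The key design change is that no input combination aborts the conversion; the worst case is a logged warning.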
[jira] [Commented] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration
[ https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024405#comment-17024405 ] Kinga Marton commented on YARN-10022: - Thank you [~prabhujoseph] for fixing the issue. I have uploaded a new patch which contains the request test cases as well. [~prabhujoseph], [~snemeth] can you please check the latest patch? > Create RM Rest API to validate a CapacityScheduler Configuration > > > Key: YARN-10022 > URL: https://issues.apache.org/jira/browse/YARN-10022 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Kinga Marton >Assignee: Kinga Marton >Priority: Major > Attachments: YARN-10022.001.patch, YARN-10022.002.patch, > YARN-10022.003.patch, YARN-10022.004.patch, YARN-10022.005.patch, > YARN-10022.WIP.patch, YARN-10022.WIP2.patch > > > RMWebService should expose a new API which gets a CapacityScheduler > Configuration as an input, validates it and returns success / failure.
[jira] [Updated] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration
[ https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kinga Marton updated YARN-10022: Attachment: YARN-10022.005.patch
[jira] [Created] (YARN-10106) Yarn logs CLI filtering by application attempt
Adam Antal created YARN-10106: - Summary: Yarn logs CLI filtering by application attempt Key: YARN-10106 URL: https://issues.apache.org/jira/browse/YARN-10106 Project: Hadoop YARN Issue Type: Sub-task Components: yarn Affects Versions: 3.3.0 Reporter: Adam Antal Assignee: Adam Antal {{ContainerLogsRequest}} got a new parameter in YARN-10101, which is the {{applicationAttempt}} - we can use this new parameter in Yarn logs CLI as well to filter by application attempt.
[jira] [Commented] (YARN-10085) FS-CS converter: remove mixed ordering policy check
[ https://issues.apache.org/jira/browse/YARN-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024375#comment-17024375 ] Peter Bacsko commented on YARN-10085: - [~snemeth] please review this change when you've got some time.
[jira] [Commented] (YARN-10085) FS-CS converter: remove mixed ordering policy check
[ https://issues.apache.org/jira/browse/YARN-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024371#comment-17024371 ] Hadoop QA commented on YARN-10085: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 7s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10085 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991891/YARN-10085-006.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 6109e01bebc7 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7f40e66 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25440/testReport/ | | Max. process+thread count | 830 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25440/console | |
[jira] [Updated] (YARN-10104) FS-CS converter: dry run should work without output defined
[ https://issues.apache.org/jira/browse/YARN-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10104: Attachment: YARN-10104-002.patch > FS-CS converter: dry run should work without output defined > --- > > Key: YARN-10104 > URL: https://issues.apache.org/jira/browse/YARN-10104 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Attachments: YARN-10104-001.patch, YARN-10104-002.patch > > > The "-d" switch doesn't work properly. > You still have to define either "-p" or "-o", which is not the way the tool > is supposed to work (ie. it doesn't need to generate any output after the > conversion).
[jira] [Updated] (YARN-10105) FS-CS converter: separator between mapping rules should be comma
[ https://issues.apache.org/jira/browse/YARN-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10105: Attachment: YARN-10105-001.patch > FS-CS converter: separator between mapping rules should be comma > > > Key: YARN-10105 > URL: https://issues.apache.org/jira/browse/YARN-10105 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: YARN-10105-001.patch > > > A converted configuration throws this error: > {noformat} > 2020-01-27 03:35:35,007 INFO > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned > to standby state > 2020-01-27 03:35:35,008 FATAL > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting > ResourceManager > java.lang.IllegalArgumentException: Illegal queue mapping > u:%user:%user;u:%user:root.users.%user;u:%user:root.default > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getQueueMappings(CapacitySchedulerConfiguration.java:1113) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.initialize(UserGroupMappingPlacementRule.java:244) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.getUserGroupMappingPlacementRule(CapacityScheduler.java:671) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updatePlacementRules(CapacityScheduler.java:712) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:753) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:361) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:426) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) > {noformat} > Mapping rules should be separated by a "," character, not by a semicolon.
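The failure mode is easy to demonstrate in isolation. A standalone illustration (not CapacityScheduler code): since CS expects the queue-mappings property to be comma-separated, the converter's semicolon-joined value arrives as one long, malformed mapping.

```java
// Standalone illustration: splitting the mappings property on ',' leaves a
// semicolon-joined value as a single token, which then fails the
// per-mapping "u:source:queue" parse with "Illegal queue mapping".
public class SeparatorSketch {
    public static void main(String[] args) {
        String fromConverter =
            "u:%user:%user;u:%user:root.users.%user;u:%user:root.default";
        String expectedByCS =
            "u:%user:%user,u:%user:root.users.%user,u:%user:root.default";
        System.out.println(fromConverter.split(",").length); // 1 malformed rule
        System.out.println(expectedByCS.split(",").length);  // 3 separate rules
    }
}
```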
[jira] [Updated] (YARN-10103) Capacity scheduler: add support for create=true/false per mapping rule
[ https://issues.apache.org/jira/browse/YARN-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10103: Labels: fs2cs (was: ) > Capacity scheduler: add support for create=true/false per mapping rule > -- > > Key: YARN-10103 > URL: https://issues.apache.org/jira/browse/YARN-10103 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Priority: Major > Labels: fs2cs > > You can't ask Capacity Scheduler for a mapping to create a queue if it > doesn't exist. > For example, this mapping would use the first rule if the queue exists. If it > doesn't, then it proceeds to the next rule: > {{u:%user:%primary_group.%user:create=false;u:%user:root.default}} > Let's say user "alice" belongs to the "admins" group. It would first try to > map {{root.admins.alice}}. But, if the queue doesn't exist, then it places > the application into {{root.default}}.
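The proposed fallback behavior can be sketched like this. Everything here is hypothetical (CapacityScheduler has no create flag today, which is the point of the ticket); the class, the rule representation, and the place() helper are my own illustration of the alice/admins example above.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical evaluator for the proposed per-rule create=true/false flag;
// none of these names exist in CapacityScheduler.
public class CreateFlagSketch {
    static Set<String> existingQueues = new HashSet<>(List.of("root.default"));

    /** Each rule is (queuePattern, createAllowed). Returns the first resolved
     *  queue that either exists or may be created, else null. */
    static String place(String user, String primaryGroup,
                        List<Map.Entry<String, Boolean>> rules) {
        for (Map.Entry<String, Boolean> rule : rules) {
            String queue = rule.getKey()
                .replace("%primary_group", primaryGroup)
                .replace("%user", user);
            if (rule.getValue() || existingQueues.contains(queue)) {
                return queue;
            }
            // create=false and the queue is missing: fall through to next rule.
        }
        return null;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Boolean>> rules = List.of(
            Map.entry("root.%primary_group.%user", false), // create=false
            Map.entry("root.default", true));
        // root.admins.alice does not exist and may not be created, so user
        // "alice" (primary group "admins") lands in root.default.
        System.out.println(place("alice", "admins", rules));
    }
}
```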
[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10099: Labels: fs2cs (was: ) > FS-CS converter: handle allow-undeclared-pools and user-as-default-queue > properly > - > > Key: YARN-10099 > URL: https://issues.apache.org/jira/browse/YARN-10099 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Attachments: YARN-10099-001.patch, YARN-10099-002.patch > > > Based on the latest documentation, there are two important properties that > are ignored if we have placement rules: > ||Property||Explanation|| > |yarn.scheduler.fair.allow-undeclared-pools|If this is true, new queues can > be created at application submission time, whether because they are specified > as the application’s queue by the submitter or because they are placed there > by the user-as-default-queue property. If this is false, any time an app > would be placed in a queue that is not specified in the allocations file, it > is placed in the “default” queue instead. Defaults to true. *If a queue > placement policy is given in the allocations file, this property is ignored.*| > |yarn.scheduler.fair.user-as-default-queue|Whether to use the username > associated with the allocation as the default queue name, in the event that a > queue name is not specified. If this is set to “false” or unset, all jobs > have a shared default queue, named “default”. Defaults to true. *If a queue > placement policy is given in the allocations file, this property is ignored.*| > Right now these settings affect the conversion regardless of the placement > rules.
[jira] [Updated] (YARN-10104) FS-CS converter: dry run should work without output defined
[ https://issues.apache.org/jira/browse/YARN-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10104: Labels: fs2cs (was: )
[jira] [Created] (YARN-10105) FS-CS converter: separator between mapping rules should be comma
Peter Bacsko created YARN-10105: --- Summary: FS-CS converter: separator between mapping rules should be comma Key: YARN-10105 URL: https://issues.apache.org/jira/browse/YARN-10105 Project: Hadoop YARN Issue Type: Sub-task Reporter: Peter Bacsko Assignee: Peter Bacsko A converted configuration fails at ResourceManager startup with "java.lang.IllegalArgumentException: Illegal queue mapping u:%user:%user;u:%user:root.users.%user;u:%user:root.default". Mapping rules should be separated by a "," character, not by a semicolon.
[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10099: Summary: FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly (was: FS-CS converter: handle allow-undeclared-pools and user-as-default queue properly)
[jira] [Commented] (YARN-10085) FS-CS converter: remove mixed ordering policy check
[ https://issues.apache.org/jira/browse/YARN-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024292#comment-17024292 ] Peter Bacsko commented on YARN-10085: - Patch v6 changes: # Print warning in the unsupported case (calculator is DRC but queue policy is just "fair") # Extra unit tests
[jira] [Updated] (YARN-10085) FS-CS converter: remove mixed ordering policy check
[ https://issues.apache.org/jira/browse/YARN-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10085: Attachment: YARN-10085-006.patch