[jira] [Commented] (YARN-9743) [JDK11] TestTimelineWebServices.testContextFactory fails
[ https://issues.apache.org/jira/browse/YARN-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026437#comment-17026437 ]

Hudson commented on YARN-9743:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17919 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17919/])
YARN-9743. [JDK11] TestTimelineWebServices.testContextFactory fails. (github: rev a5ef08b619fff296cbe8e987d17ff5caffc703d7)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/ContextFactory.java

> [JDK11] TestTimelineWebServices.testContextFactory fails
> ---------------------------------------------------------
>
>                 Key: YARN-9743
>                 URL: https://issues.apache.org/jira/browse/YARN-9743
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: timelineservice
>    Affects Versions: 3.2.0
>            Reporter: Adam Antal
>            Assignee: Akira Ajisaka
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: YARN-9743.001.patch, YARN-9743.002.patch
>
>
> Tested on OpenJDK 11.0.2 on a Mac.
> Stack trace:
> {noformat}
> [ERROR] Tests run: 29, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 36.016 s <<< FAILURE! - in org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
> [ERROR] testContextFactory(org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices)  Time elapsed: 1.031 s  <<< ERROR!
> java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory
> 	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
> 	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
> 	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
> 	at java.base/java.lang.Class.forName0(Native Method)
> 	at java.base/java.lang.Class.forName(Class.java:315)
> 	at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.newContext(ContextFactory.java:85)
> 	at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.createContext(ContextFactory.java:112)
> 	at org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices.testContextFactory(TestTimelineWebServices.java:1039)
> 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}

--
This message was sent by Atlassian Jira (v8.3.4#803005)

To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
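The ClassNotFoundException arises because the JDK-internal JAXB implementation (com.sun.xml.internal.bind) was removed along with the java.xml.bind module in Java 11 (JEP 320), so a hard-coded Class.forName on the internal ContextFactory fails. A common shape for such a fix is to try one class name and fall back to another; the sketch below shows only that pattern with placeholder class names (the candidate "no.such.jdk.internal.ContextFactory" is deliberately fake and "java.util.ArrayList" stands in for an external JAXB jar) — it is not the actual patch committed here.

```java
// Sketch: resolve the first loadable class from a list of candidates, the
// pattern used to cope with JAXB classes that moved or disappeared in JDK 11.
public class FallbackLoader {
    static Class<?> firstAvailable(String... candidates) throws ClassNotFoundException {
        ClassNotFoundException last = null;
        for (String name : candidates) {
            try {
                return Class.forName(name);   // throws if the class is absent
            } catch (ClassNotFoundException e) {
                last = e;                     // remember the failure, try next
            }
        }
        if (last != null) {
            throw last;                       // none of the candidates loaded
        }
        throw new ClassNotFoundException("no candidate class names given");
    }

    public static void main(String[] args) throws Exception {
        // The first (fake) name is absent everywhere, so resolution falls
        // through to the stand-in second candidate.
        Class<?> c = firstAvailable(
            "no.such.jdk.internal.ContextFactory",  // placeholder internal class
            "java.util.ArrayList");                 // placeholder fallback
        System.out.println(c.getName());
    }
}
```

On JDK 8 the internal class still exists, so code like this keeps working there while also surviving on JDK 11 with an external JAXB implementation on the classpath.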
[jira] [Commented] (YARN-9624) Use switch case for ProtoUtils#convertFromProtoFormat containerState
[ https://issues.apache.org/jira/browse/YARN-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026427#comment-17026427 ]

Ayush Saxena commented on YARN-9624:
------------------------------------

+1

> Use switch case for ProtoUtils#convertFromProtoFormat containerState
> ---------------------------------------------------------------------
>
>                 Key: YARN-9624
>                 URL: https://issues.apache.org/jira/browse/YARN-9624
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Bibin Chundatt
>            Assignee: Bilwa S T
>            Priority: Major
>              Labels: performance
>         Attachments: YARN-9624.001.patch, YARN-9624.002.patch, YARN-9624.003.patch
>
>
> On a large cluster with 100K+ containers, running {{ContainerState.valueOf(e.name().replace(CONTAINER_STATE_PREFIX, ""))}} on every heartbeat is too costly. Update it to a switch case.
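The string-based conversion allocates a fresh String per call before the valueOf lookup; a switch over the proto enum does a direct constant-time mapping with no allocation. A minimal self-contained sketch of the two approaches — the enum constants below are simplified stand-ins for YARN's generated ContainerStateProto and public ContainerState types, not the real classes:

```java
public class ProtoConvert {
    // Simplified stand-ins for the generated proto enum and the public enum.
    enum ContainerStateProto { C_NEW, C_RUNNING, C_COMPLETE }
    enum ContainerState { NEW, RUNNING, COMPLETE }

    static final String CONTAINER_STATE_PREFIX = "C_";

    // Old approach: builds a new String on every call, then a name lookup.
    static ContainerState viaStringReplace(ContainerStateProto e) {
        return ContainerState.valueOf(e.name().replace(CONTAINER_STATE_PREFIX, ""));
    }

    // New approach: direct switch, no intermediate allocation.
    static ContainerState viaSwitch(ContainerStateProto e) {
        switch (e) {
            case C_NEW:      return ContainerState.NEW;
            case C_RUNNING:  return ContainerState.RUNNING;
            case C_COMPLETE: return ContainerState.COMPLETE;
            default: throw new IllegalArgumentException("Unknown state: " + e);
        }
    }

    public static void main(String[] args) {
        // Both conversions must agree for every proto constant.
        for (ContainerStateProto p : ContainerStateProto.values()) {
            if (viaSwitch(p) != viaStringReplace(p)) {
                throw new AssertionError("mismatch for " + p);
            }
        }
        System.out.println("conversions agree");
    }
}
```

At heartbeat rates over 100K containers, dropping the per-call String allocation is where the saving comes from; the switch also fails fast on a new proto constant instead of throwing from deep inside valueOf.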
[jira] [Assigned] (YARN-10111) In Federation cluster Distributed Shell Application submission fails as YarnClient#getQueueInfo is not implemented
[ https://issues.apache.org/jira/browse/YARN-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T reassigned YARN-10111:
--------------------------------

    Assignee: Bilwa S T

> In Federation cluster Distributed Shell Application submission fails as
> YarnClient#getQueueInfo is not implemented
> ------------------------------------------------------------------------
>
>                 Key: YARN-10111
>                 URL: https://issues.apache.org/jira/browse/YARN-10111
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Sushanta Sen
>            Assignee: Bilwa S T
>            Priority: Blocker
>
>
> In a Federation cluster, Distributed Shell application submission fails because YarnClient#getQueueInfo is not implemented.
[jira] [Assigned] (YARN-10112) Livelock (Runnable FairScheduler.getAppWeight) in Resource Manager when used with Fair Scheduler size based weights enabled
[ https://issues.apache.org/jira/browse/YARN-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wilfred Spiegelenburg reassigned YARN-10112:
--------------------------------------------

    Assignee: Wilfred Spiegelenburg

> Livelock (Runnable FairScheduler.getAppWeight) in Resource Manager when used
> with Fair Scheduler size based weights enabled
> -----------------------------------------------------------------------------
>
>                 Key: YARN-10112
>                 URL: https://issues.apache.org/jira/browse/YARN-10112
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.8.5
>            Reporter: Yu Wang
>            Assignee: Wilfred Spiegelenburg
>            Priority: Minor
>
> The user runs the FairScheduler with {{yarn.scheduler.fair.sizebasedweight}} set to true. The JStack thread dump provided by the support engineers shows that the {{getAppWeight}} method in FairScheduler was continuously holding the FairScheduler object monitor, leaving org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate forever waiting to enter the same monitor, resulting in the livelock.
>
> The issue occurs very infrequently and we are still unable to find a way to reproduce it consistently. It resembles what YARN-1458 reports, but that fix appears to have been in effect since 2.6.
>
> {code:java}
> "ResourceManager Event Processor" #17 prio=5 os_prio=0 tid=0x7fbcee65e800 nid=0x2ea4 waiting for monitor entry [0x7fbcbcd5e000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:1105)
> 	- waiting to lock <0x0006eb816b18> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1362)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:129)
> 	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:801)
> 	at java.lang.Thread.run(Thread.java:748)
>
> "FairSchedulerUpdateThread" #23 daemon prio=5 os_prio=0 tid=0x7fbceea0e800 nid=0x2ea2 runnable [0x7fbcbcf6]
>    java.lang.Thread.State: RUNNABLE
> 	at java.lang.StrictMath.log1p(Native Method)
> 	at java.lang.Math.log1p(Math.java:1747)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getAppWeight(FairScheduler.java:570)
> 	- locked <0x0006eb816b18> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt.getWeights(FSAppAttempt.java:953)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShare(ComputeFairShares.java:192)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.resourceUsedWithWeightToResourceRatio(ComputeFairShares.java:180)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSharesInternal(ComputeFairShares.java:140)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShares(ComputeFairShares.java:51)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy.computeShares(FairSharePolicy.java:138)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.recomputeShares(FSLeafQueue.java:235)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.recomputeShares(FSParentQueue.java:89)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.update(FairScheduler.java:365)
> 	- locked <0x0006eb816b18> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
> 	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$UpdateThread.run(FairScheduler.java:314)
> {code}
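With size-based weights enabled, the scheduler weighs an application by a logarithm of its resource demand; the thread dump above shows that Math.log1p call running while the FairScheduler monitor is held. A hedged sketch of that weight calculation computed from a plain snapshot of the demand, with no lock held during the (native) log call — the method name and the log1p/log(2) form mirror what the stack trace shows, but this is an illustration, not the YARN source:

```java
public class SizeBasedWeight {
    // Sketch: size-based app weight as ln(1 + demandMemoryMb) / ln(2), i.e.
    // log base 2 of (1 + demand). Taking a snapshot of the demand first means
    // the expensive log call needs no scheduler lock at all.
    static double appWeight(long demandMemoryMb, boolean sizeBasedWeight) {
        double weight = 1.0;                       // default: all apps equal
        if (sizeBasedWeight) {
            // Apps demanding more memory weigh more, but only logarithmically.
            weight = Math.log1p(demandMemoryMb) / Math.log(2);
        }
        return weight;
    }

    public static void main(String[] args) {
        System.out.println(appWeight(1024, false)); // feature off: weight 1.0
        System.out.println(appWeight(0, true));     // no demand: weight 0.0
        System.out.println(appWeight(1023, true));  // ~10 (log2 of 1024)
    }
}
```

The livelock risk is not the formula itself but holding the scheduler monitor across it for every app on every update pass; YARN-7414 and YARN-7513 (mentioned in the comment below) moved the computation out and dropped the lock.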
[jira] [Commented] (YARN-10112) Livelock (Runnable FairScheduler.getAppWeight) in Resource Manager when used with Fair Scheduler size based weights enabled
[ https://issues.apache.org/jira/browse/YARN-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026400#comment-17026400 ]

Wilfred Spiegelenburg commented on YARN-10112:
----------------------------------------------

This does not happen in the current releases of YARN anymore. In YARN-7414 we moved {{getAppWeight}} out of the scheduler into the {{FSAppAttempt}}. That did not solve the locking issue but was the right thing to do. In the follow-up YARN-7513 I removed the lock from the new call.

I would say that this is thus a duplicate of the combination YARN-7414 & YARN-7513. Both are fixed in Hadoop 3.0.1 and 3.1. Backporting the change is possible.
[jira] [Commented] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime
[ https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026360#comment-17026360 ]

Hadoop QA commented on YARN-10084:
----------------------------------

| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 16m 43s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-2.10 Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for branch |
| +1 | mvninstall | 8m 54s | branch-2.10 passed |
| +1 | compile | 6m 48s | branch-2.10 passed with JDK v1.7.0_95 |
| +1 | compile | 5m 49s | branch-2.10 passed with JDK v1.8.0_242 |
| +1 | checkstyle | 0m 54s | branch-2.10 passed |
| +1 | mvnsite | 1m 15s | branch-2.10 passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| +1 | findbugs | 1m 14s | branch-2.10 passed |
| +1 | javadoc | 0m 58s | branch-2.10 passed with JDK v1.7.0_95 |
| +1 | javadoc | 0m 50s | branch-2.10 passed with JDK v1.8.0_242 |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 48s | the patch passed |
| +1 | compile | 6m 15s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 6m 15s | the patch passed |
| +1 | compile | 5m 48s | the patch passed with JDK v1.8.0_242 |
| +1 | javac | 5m 48s | the patch passed |
| +1 | checkstyle | 0m 58s | hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 140 unchanged - 1 fixed = 140 total (was 141) |
| +1 | mvnsite | 1m 27s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| +1 | findbugs | 1m 22s | the patch passed |
| +1 | javadoc | 0m 54s | the patch passed with JDK v1.7.0_95 |
| +1 | javadoc | 0m 50s | the patch passed with JDK v1.8.0_242 |
|| || || || Other Tests ||
| +1 | unit | 66m 30s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | unit | 0m 17s | hadoop-yarn-site in the patch passed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
[jira] [Commented] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime
[ https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026289#comment-17026289 ]

Eric Badger commented on YARN-10084:
------------------------------------

+1 on the branch-3.1 patch. I just committed that to branch-3.1.

> Allow inheritance of max app lifetime / default app lifetime
> -------------------------------------------------------------
>
>                 Key: YARN-10084
>                 URL: https://issues.apache.org/jira/browse/YARN-10084
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: capacity scheduler
>    Affects Versions: 2.10.0, 3.2.1, 3.1.3
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>            Priority: Major
>             Fix For: 3.3.0, 3.2.2, 3.1.4
>
>         Attachments: YARN-10084.001.patch, YARN-10084.002.patch, YARN-10084.003.patch, YARN-10084.004.patch, YARN-10084.005.patch, YARN-10084.006.patch, YARN-10084.branch-2.10.005.patch, YARN-10084.branch-2.10.006.patch, YARN-10084.branch-3.1.005.patch, YARN-10084.branch-3.1.006.patch, YARN-10084.branch-3.2.005.patch, YARN-10084.branch-3.2.006.patch
>
>
> Currently, {{maximum-application-lifetime}} and {{default-application-lifetime}} must be set for each leaf queue. If it is not set for a particular leaf queue, then there will be no time limit on apps running in that queue. It should be possible to set {{yarn.scheduler.capacity.root.maximum-application-lifetime}} for the root queue and allow child queues to override that value if desired.
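The inheritance being added can be expressed in {{capacity-scheduler.xml}}. A hedged sketch of what such a configuration could look like once root-level values are inherited — the {{root.dev}} queue name and the second values are illustrative, not taken from the patch:

```xml
<configuration>
  <!-- Set once at the root; with inheritance, child queues pick this up
       instead of defaulting to an unlimited lifetime. Values in seconds. -->
  <property>
    <name>yarn.scheduler.capacity.root.maximum-application-lifetime</name>
    <value>86400</value> <!-- illustrative: 24 hours -->
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default-application-lifetime</name>
    <value>3600</value> <!-- illustrative: 1 hour -->
  </property>

  <!-- A leaf queue (hypothetical "dev") can still override the inherited
       value, matching the "child queues may override" behavior described. -->
  <property>
    <name>yarn.scheduler.capacity.root.dev.maximum-application-lifetime</name>
    <value>7200</value>
  </property>
</configuration>
```

Before this change, a leaf queue that omitted both properties had no lifetime limit at all; with inheritance, omission means falling back to the root's values.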
[jira] [Updated] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime
[ https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Payne updated YARN-10084:
------------------------------
    Attachment: YARN-10084.branch-2.10.006.patch
[jira] [Updated] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime
[ https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Badger updated YARN-10084:
-------------------------------
    Fix Version/s: 3.1.4
[jira] [Commented] (YARN-10109) Allow stop and convert from leaf to parent queue in a single Mutation API call
[ https://issues.apache.org/jira/browse/YARN-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026281#comment-17026281 ]

Hadoop QA commented on YARN-10109:
----------------------------------

| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 30m 56s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 5m 31s | Maven dependency ordering for branch |
| +1 | mvninstall | 26m 18s | trunk passed |
| +1 | compile | 9m 28s | trunk passed |
| +1 | checkstyle | 1m 28s | trunk passed |
| +1 | mvnsite | 1m 39s | trunk passed |
| +1 | shadedclient | 17m 4s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 28s | trunk passed |
| +1 | javadoc | 1m 9s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 33s | the patch passed |
| +1 | compile | 7m 11s | the patch passed |
| +1 | javac | 7m 11s | the patch passed |
| +1 | checkstyle | 1m 16s | the patch passed |
| +1 | mvnsite | 1m 27s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 39s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 28s | the patch passed |
| +1 | javadoc | 1m 7s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 53s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 83m 49s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 211m 37s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10109 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992159/YARN-10109-003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 05ae14b8b8e7 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 799d4c1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25468/testReport/ |
| Max. process+thread count | 830 (vs. ulimit of 5500) |
[jira] [Commented] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime
[ https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026280#comment-17026280 ]

Hadoop QA commented on YARN-10084:
----------------------------------

| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 19m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-3.1 Compile Tests ||
| 0 | mvndep | 0m 32s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 36s | branch-3.1 passed |
| +1 | compile | 7m 35s | branch-3.1 passed |
| +1 | checkstyle | 1m 6s | branch-3.1 passed |
| +1 | mvnsite | 1m 23s | branch-3.1 passed |
| +1 | shadedclient | 15m 3s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| +1 | findbugs | 1m 18s | branch-3.1 passed |
| +1 | javadoc | 0m 58s | branch-3.1 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 55s | the patch passed |
| +1 | compile | 6m 42s | the patch passed |
| +1 | javac | 6m 42s | the patch passed |
| +1 | checkstyle | 1m 2s | the patch passed |
| +1 | mvnsite | 1m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 42s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| +1 | findbugs | 1m 22s | the patch passed |
| +1 | javadoc | 0m 52s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 71m 23s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | unit | 0m 20s | hadoop-yarn-site in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 167m 40s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:70a0ef5d4a6 |
| JIRA Issue | YARN-10084 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992141/YARN-10084.branch-3.1.006.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 37deca22241d
[jira] [Created] (YARN-10112) Livelock (Runnable FairScheduler.getAppWeight) in Resource Manager when used with Fair Scheduler size based weights enabled
Yu Wang created YARN-10112: -- Summary: Livelock (Runnable FairScheduler.getAppWeight) in Resource Manager when used with Fair Scheduler size based weights enabled Key: YARN-10112 URL: https://issues.apache.org/jira/browse/YARN-10112 Project: Hadoop YARN Issue Type: Bug Components: fairscheduler Affects Versions: 2.8.5 Reporter: Yu Wang The user runs the FairScheduler with yarn.scheduler.fair.sizebasedweight set to true. From the jstack thread dump attached to the ticket by the support engineers, we can see that the getAppWeight method of FairScheduler (below) was continuously holding the FairScheduler object monitor, which kept org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate waiting to enter the same monitor, resulting in the livelock. The issue occurs very infrequently and we have not yet found a way to reproduce it consistently. It resembles what YARN-1458 reports, but that fix appears to have been in effect since 2.6.
{code:java}
"ResourceManager Event Processor" #17 prio=5 os_prio=0 tid=0x7fbcee65e800 nid=0x2ea4 waiting for monitor entry [0x7fbcbcd5e000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:1105)
	- waiting to lock <0x0006eb816b18> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1362)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:129)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:801)
	at java.lang.Thread.run(Thread.java:748)

"FairSchedulerUpdateThread" #23 daemon prio=5 os_prio=0 tid=0x7fbceea0e800 nid=0x2ea2 runnable [0x7fbcbcf6]
   java.lang.Thread.State: RUNNABLE
	at java.lang.StrictMath.log1p(Native Method)
	at java.lang.Math.log1p(Math.java:1747)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getAppWeight(FairScheduler.java:570)
	- locked <0x0006eb816b18> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt.getWeights(FSAppAttempt.java:953)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShare(ComputeFairShares.java:192)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.resourceUsedWithWeightToResourceRatio(ComputeFairShares.java:180)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSharesInternal(ComputeFairShares.java:140)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShares(ComputeFairShares.java:51)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy.computeShares(FairSharePolicy.java:138)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.recomputeShares(FSLeafQueue.java:235)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.recomputeShares(FSParentQueue.java:89)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.update(FairScheduler.java:365)
	- locked <0x0006eb816b18> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$UpdateThread.run(FairScheduler.java:314)
{code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
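For context, a minimal, hypothetical sketch of the kind of size-based weight computation the dump points at (the real FairScheduler.getAppWeight also consults the scheduling policy and, crucially, runs under the FairScheduler monitor for every app on every update pass, which is what allows a hot update loop to starve nodeUpdate of the same lock):

```java
// Simplified sketch, not the actual Hadoop code: when
// yarn.scheduler.fair.sizebasedweight is enabled, an app's weight grows
// with the log of its resource demand, via Math.log1p as in the dump above.
public class SizeBasedWeight {
    static double appWeight(long demandMemoryMb, boolean sizeBasedWeight) {
        double weight = 1.0;
        if (sizeBasedWeight) {
            // log2(1 + demand): apps demanding more memory get a higher weight.
            weight = Math.log1p(demandMemoryMb) / Math.log(2);
        }
        return weight;
    }

    public static void main(String[] args) {
        System.out.println(appWeight(1024, true));   // ~10.0
        System.out.println(appWeight(1024, false));  // 1.0
    }
}
```

The computation itself is cheap per call; the report's concern is that it is repeated for every application while the shared scheduler monitor is held.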
[jira] [Commented] (YARN-9743) [JDK11] TestTimelineWebServices.testContextFactory fails
[ https://issues.apache.org/jira/browse/YARN-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026223#comment-17026223 ] Nick Dimiduk commented on YARN-9743: +1 (non-binding) > [JDK11] TestTimelineWebServices.testContextFactory fails > > > Key: YARN-9743 > URL: https://issues.apache.org/jira/browse/YARN-9743 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineservice >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Akira Ajisaka >Priority: Major > Attachments: YARN-9743.001.patch, YARN-9743.002.patch > > > Tested on OpenJDK 11.0.2 on a Mac. > Stack trace: > {noformat} > [ERROR] Tests run: 29, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: > 36.016 s <<< FAILURE! - in > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices > [ERROR] > testContextFactory(org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices) > Time elapsed: 1.031 s <<< ERROR! > java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory > at > java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583) > at > java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) > at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) > at java.base/java.lang.Class.forName0(Native Method) > at java.base/java.lang.Class.forName(Class.java:315) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.newContext(ContextFactory.java:85) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.createContext(ContextFactory.java:112) > at > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices.testContextFactory(TestTimelineWebServices.java:1039) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {noformat}
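The ClassNotFoundException above stems from the JDK-internal JAXB factory (com.sun.xml.internal.bind.v2.ContextFactory) no longer being reachable on JDK 9+. A minimal probe of that situation (a hypothetical helper for illustration — the actual fix lives in the Hadoop ContextFactory class touched by the commit referenced in this thread) might look like:

```java
// Probe whether the JDK-internal JAXB context factory is present (JDK 8)
// or absent (JDK 11+), and report which creation path would be taken.
public class JaxbContextProbe {
    static String contextFactorySource() {
        try {
            // Present on JDK 8; removed from the application class path on JDK 9+.
            Class.forName("com.sun.xml.internal.bind.v2.ContextFactory");
            return "jdk-internal";
        } catch (ClassNotFoundException e) {
            // Fall back to a portable JAXBContext.newInstance-style path.
            return "portable";
        }
    }

    public static void main(String[] args) {
        System.out.println(contextFactorySource());
    }
}
```

On JDK 11 this prints "portable", matching the stack trace in the bug report where the reflective Class.forName lookup fails.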
[jira] [Commented] (YARN-10109) Allow stop and convert from leaf to parent queue in a single Mutation API call
[ https://issues.apache.org/jira/browse/YARN-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026210#comment-17026210 ] Hadoop QA commented on YARN-10109: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 12s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10109 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992056/YARN-10109-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 31920a2e8818 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 799d4c1 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/25467/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25467/testReport/ | | Max. process+thread count | 830 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U:
[jira] [Commented] (YARN-8982) [Router] Add locality policy
[ https://issues.apache.org/jira/browse/YARN-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026179#comment-17026179 ] Hadoop QA commented on YARN-8982: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 29s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 58m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-8982 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992163/YARN-8982.v4.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ae52a0b9dfc3 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 799d4c1 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/25469/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25469/testReport/ | | Max. process+thread count | 311 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U:
[jira] [Updated] (YARN-8982) [Router] Add locality policy
[ https://issues.apache.org/jira/browse/YARN-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Young Chen updated YARN-8982: - Attachment: YARN-8982.v4.patch > [Router] Add locality policy > - > > Key: YARN-8982 > URL: https://issues.apache.org/jira/browse/YARN-8982 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Young Chen >Priority: Major > Attachments: YARN-8982.v1.patch, YARN-8982.v2.patch, > YARN-8982.v3.patch, YARN-8982.v4.patch > > > This jira tracks the effort to add a new policy in the Router. > This policy will allow the Router to pick the SubCluster based on the node > that the client requested.
[jira] [Commented] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime
[ https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026121#comment-17026121 ] Hadoop QA commented on YARN-10084: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-3.1 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 54s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} branch-3.1 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}180m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:70a0ef5d4a6 | | JIRA Issue | YARN-10084 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992141/YARN-10084.branch-3.1.006.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4b93bdddb6af 4.15.0-74-generic
[jira] [Updated] (YARN-10109) Allow stop and convert from leaf to parent queue in a single Mutation API call
[ https://issues.apache.org/jira/browse/YARN-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-10109: - Attachment: YARN-10109-003.patch > Allow stop and convert from leaf to parent queue in a single Mutation API call > -- > > Key: YARN-10109 > URL: https://issues.apache.org/jira/browse/YARN-10109 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-10109-001.patch, YARN-10109-002.patch, > YARN-10109-003.patch > > > SchedulerConf Mutation API does not allow stopping a queue and adding a queue under an existing leaf queue in a single call. > *Repro:* > > {code:java}
> Capacity-Scheduler.xml:
> yarn.scheduler.capacity.root.queues = default
> yarn.scheduler.capacity.root.default.capacity = 100
>
> cat abc.xml
> <sched-conf>
>   <add-queue>
>     <queue-name>root.default.v1</queue-name>
>     <params>
>       <entry>
>         <key>capacity</key>
>         <value>100</value>
>       </entry>
>     </params>
>   </add-queue>
>   <update-queue>
>     <queue-name>root.default</queue-name>
>     <params>
>       <entry>
>         <key>state</key>
>         <value>STOPPED</value>
>       </entry>
>     </params>
>   </update-queue>
> </sched-conf>
>
> [yarn@pjoseph-1 tmp]$ curl --negotiate -u : -X PUT -d @add.xml -H "Content-type: application/xml" 'http://:8088/ws/v1/cluster/scheduler-conf?user.name=yarn'
> Failed to re-init queues : Can not convert the leaf queue: root.default to parent queue since it is not yet in stopped state. Current State : RUNNING
> {code}
[jira] [Updated] (YARN-10109) Allow stop and convert from leaf to parent queue in a single Mutation API call
[ https://issues.apache.org/jira/browse/YARN-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-10109: - Attachment: (was: YARN-10109-003.patch) > Allow stop and convert from leaf to parent queue in a single Mutation API call > -- > > Key: YARN-10109 > URL: https://issues.apache.org/jira/browse/YARN-10109 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-10109-001.patch, YARN-10109-002.patch > > > SchedulerConf Mutation API does not allow stopping a queue and adding a queue under an existing leaf queue in a single call. > *Repro:* > > {code:java}
> Capacity-Scheduler.xml:
> yarn.scheduler.capacity.root.queues = default
> yarn.scheduler.capacity.root.default.capacity = 100
>
> cat abc.xml
> <sched-conf>
>   <add-queue>
>     <queue-name>root.default.v1</queue-name>
>     <params>
>       <entry>
>         <key>capacity</key>
>         <value>100</value>
>       </entry>
>     </params>
>   </add-queue>
>   <update-queue>
>     <queue-name>root.default</queue-name>
>     <params>
>       <entry>
>         <key>state</key>
>         <value>STOPPED</value>
>       </entry>
>     </params>
>   </update-queue>
> </sched-conf>
>
> [yarn@pjoseph-1 tmp]$ curl --negotiate -u : -X PUT -d @add.xml -H "Content-type: application/xml" 'http://:8088/ws/v1/cluster/scheduler-conf?user.name=yarn'
> Failed to re-init queues : Can not convert the leaf queue: root.default to parent queue since it is not yet in stopped state. Current State : RUNNING
> {code}
[jira] [Updated] (YARN-10109) Allow stop and convert from leaf to parent queue in a single Mutation API call
[ https://issues.apache.org/jira/browse/YARN-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-10109: - Attachment: YARN-10109-003.patch > Allow stop and convert from leaf to parent queue in a single Mutation API call > -- > > Key: YARN-10109 > URL: https://issues.apache.org/jira/browse/YARN-10109 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-10109-001.patch, YARN-10109-002.patch, > YARN-10109-003.patch > > > SchedulerConf Mutation API does not allow stopping a queue and adding a queue under an existing leaf queue in a single call. > *Repro:* > > {code:java}
> Capacity-Scheduler.xml:
> yarn.scheduler.capacity.root.queues = default
> yarn.scheduler.capacity.root.default.capacity = 100
>
> cat abc.xml
> <sched-conf>
>   <add-queue>
>     <queue-name>root.default.v1</queue-name>
>     <params>
>       <entry>
>         <key>capacity</key>
>         <value>100</value>
>       </entry>
>     </params>
>   </add-queue>
>   <update-queue>
>     <queue-name>root.default</queue-name>
>     <params>
>       <entry>
>         <key>state</key>
>         <value>STOPPED</value>
>       </entry>
>     </params>
>   </update-queue>
> </sched-conf>
>
> [yarn@pjoseph-1 tmp]$ curl --negotiate -u : -X PUT -d @add.xml -H "Content-type: application/xml" 'http://:8088/ws/v1/cluster/scheduler-conf?user.name=yarn'
> Failed to re-init queues : Can not convert the leaf queue: root.default to parent queue since it is not yet in stopped state. Current State : RUNNING
> {code}
[jira] [Commented] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix misc issues
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026044#comment-17026044 ] Hadoop QA commented on YARN-10099: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 4s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}148m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.converter.TestFSConfigToCSConfigConverter | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10099 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992134/YARN-10099-007.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux b66627c8fe14 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7f3e1e0 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/25465/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results |
[jira] [Commented] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix misc issues
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025979#comment-17025979 ] Hadoop QA commented on YARN-10099: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 32s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}162m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.converter.TestFSConfigToCSConfigConverter | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10099 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992123/YARN-10099-006.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 731181957a97 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 825db8f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/25464/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25464/testReport/ | | Max. process+thread
[jira] [Updated] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime
[ https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-10084: -- Attachment: YARN-10084.branch-3.1.006.patch > Allow inheritance of max app lifetime / default app lifetime > > > Key: YARN-10084 > URL: https://issues.apache.org/jira/browse/YARN-10084 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacity scheduler >Affects Versions: 2.10.0, 3.2.1, 3.1.3 >Reporter: Eric Payne >Assignee: Eric Payne >Priority: Major > Fix For: 3.3.0, 3.2.2 > > Attachments: YARN-10084.001.patch, YARN-10084.002.patch, > YARN-10084.003.patch, YARN-10084.004.patch, YARN-10084.005.patch, > YARN-10084.006.patch, YARN-10084.branch-2.10.005.patch, > YARN-10084.branch-3.1.005.patch, YARN-10084.branch-3.1.006.patch, > YARN-10084.branch-3.2.005.patch, YARN-10084.branch-3.2.006.patch > > > Currently, {{maximum-application-lifetime}} and > {{default-application-lifetime}} must be set for each leaf queue. If it is > not set for a particular leaf queue, then there will be no time limit on apps > running in that queue. It should be possible to set > {{yarn.scheduler.capacity.root.maximum-application-lifetime}} for the root > queue and allow child queues to override that value if desired. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
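The inheritance proposed in YARN-10084 above can be modelled in a few lines. This is an illustrative sketch, not the committed patch: the class and method names are invented, and "unset" is modelled as a non-positive value.

```java
// Hypothetical model of lifetime inheritance for YARN-10084: a leaf queue
// that does not set maximum-application-lifetime inherits the value set on
// the root (or nearest parent) queue, and may override it if desired.
public class LifetimeInheritance {
    /** Returns the leaf's own lifetime if set (> 0), otherwise the inherited one. */
    public static long effectiveMaxLifetime(long leafValue, long parentValue) {
        return leafValue > 0 ? leafValue : parentValue;
    }

    public static void main(String[] args) {
        // Leaf unset (-1): falls back to the root/parent setting.
        System.out.println(effectiveMaxLifetime(-1, 3600));  // 3600
        // Leaf overrides the parent, as the issue proposes to allow.
        System.out.println(effectiveMaxLifetime(7200, 3600)); // 7200
    }
}
```

Before the change, an unset leaf value meant "no time limit"; with inheritance it means "use the parent's limit", which is the behavioral point of the issue.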
[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix misc issues
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10099: Attachment: YARN-10099-007.patch > FS-CS converter: handle allow-undeclared-pools and user-as-default-queue > properly and fix misc issues > - > > Key: YARN-10099 > URL: https://issues.apache.org/jira/browse/YARN-10099 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Attachments: YARN-10099-001.patch, YARN-10099-002.patch, > YARN-10099-003.patch, YARN-10099-004.patch, YARN-10099-005.patch, > YARN-10099-006.patch, YARN-10099-007.patch > > > This ticket is intended to fix three issues: > 1. Based on the latest documentation, there are two important properties that > are ignored if we have placement rules: > ||Property||Explanation|| > |yarn.scheduler.fair.allow-undeclared-pools|If this is true, new queues can > be created at application submission time, whether because they are specified > as the application’s queue by the submitter or because they are placed there > by the user-as-default-queue property. If this is false, any time an app > would be placed in a queue that is not specified in the allocations file, it > is placed in the “default” queue instead. Defaults to true. *If a queue > placement policy is given in the allocations file, this property is ignored.*| > |yarn.scheduler.fair.user-as-default-queue|Whether to use the username > associated with the allocation as the default queue name, in the event that a > queue name is not specified. If this is set to “false” or unset, all jobs > have a shared default queue, named “default”. Defaults to true. *If a queue > placement policy is given in the allocations file, this property is ignored.*| > Right now these settings affect the conversion regardless of the placement > rules. > 2. 
A converted configuration throws this error: > {noformat} > 2020-01-27 03:35:35,007 INFO > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned > to standby state > 2020-01-27 03:35:35,008 FATAL > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting > ResourceManager > java.lang.IllegalArgumentException: Illegal queue mapping > u:%user:%user;u:%user:root.users.%user;u:%user:root.default > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getQueueMappings(CapacitySchedulerConfiguration.java:1113) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.initialize(UserGroupMappingPlacementRule.java:244) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.getUserGroupMappingPlacementRule(CapacityScheduler.java:671) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updatePlacementRules(CapacityScheduler.java:712) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:753) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:361) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:426) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) > {noformat} > Mapping rules should be separated by a "," character, not by a semicolon. > 3. When initializing FS for conversion, we add the current {{yarn-site.xml}} > as resource. This is not necessary. This can cause problems like: > {noformat} > [...] 
> 20/01/29 02:45:38 ERROR config.RangerConfiguration: > addResourceIfReadable(ranger-yarn-audit.xml): couldn't find resource file > location > 20/01/29 02:45:38 ERROR config.RangerConfiguration: > addResourceIfReadable(ranger-yarn-security.xml): couldn't find resource file > location > 20/01/29 02:45:38 ERROR config.RangerConfiguration: > addResourceIfReadable(ranger-yarn-policymgr-ssl.xml): couldn't find resource > file location > 20/01/29 02:45:38 ERROR conf.Configuration: error parsing conf > file:/etc/hadoop/conf.cloudera.YARN-1/xasecure-audit.xml > java.io.FileNotFoundException: > /etc/hadoop/conf.cloudera.YARN-1/xasecure-audit.xml (No such file or > directory) > at java.base/java.io.FileInputStream.open0(Native Method) > at java.base/java.io.FileInputStream.open(FileInputStream.java:219) > at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157) >
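The separator problem described in point 2 of YARN-10099 above can be illustrated in isolation: CapacityScheduler splits the queue-mappings property on commas, so a semicolon-joined string produced by the converter is parsed as a single (and therefore illegal) mapping entry. The helper below is a toy illustration of the parse, not the actual CapacitySchedulerConfiguration code.

```java
// Illustration of the comma-vs-semicolon separator issue: splitting the
// mapping string on "," yields one bogus entry when semicolons were used.
public class MappingSeparator {
    /** Counts how many mapping entries a comma-splitting parser would see. */
    public static int countMappings(String mappings) {
        return mappings.split(",").length;
    }

    public static void main(String[] args) {
        String semicolons = "u:%user:%user;u:%user:root.users.%user;u:%user:root.default";
        String commas = "u:%user:%user,u:%user:root.users.%user,u:%user:root.default";
        System.out.println(countMappings(semicolons)); // 1 entry -> "Illegal queue mapping"
        System.out.println(countMappings(commas));     // 3 entries -> parsed correctly
    }
}
```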
[jira] [Comment Edited] (YARN-9743) [JDK11] TestTimelineWebServices.testContextFactory fails
[ https://issues.apache.org/jira/browse/YARN-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025907#comment-17025907 ] Kinga Marton edited comment on YARN-9743 at 1/29/20 2:17 PM: - [~aajisaka] I have checked and tested your PR and it LGTM, +1 (non-binding). Thank you! was (Author: kmarton): [~aajisaka] I have checked and tested your PR and it LGTM, +1 (non-binding) > [JDK11] TestTimelineWebServices.testContextFactory fails > > > Key: YARN-9743 > URL: https://issues.apache.org/jira/browse/YARN-9743 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineservice >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Akira Ajisaka >Priority: Major > Attachments: YARN-9743.001.patch, YARN-9743.002.patch > > > Tested on OpenJDK 11.0.2 on a Mac. > Stack trace: > {noformat} > [ERROR] Tests run: 29, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: > 36.016 s <<< FAILURE! - in > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices > [ERROR] > testContextFactory(org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices) > Time elapsed: 1.031 s <<< ERROR! 
> java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory > at > java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583) > at > java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) > at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) > at java.base/java.lang.Class.forName0(Native Method) > at java.base/java.lang.Class.forName(Class.java:315) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.newContext(ContextFactory.java:85) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.createContext(ContextFactory.java:112) > at > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices.testContextFactory(TestTimelineWebServices.java:1039) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail:
[jira] [Commented] (YARN-9743) [JDK11] TestTimelineWebServices.testContextFactory fails
[ https://issues.apache.org/jira/browse/YARN-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025907#comment-17025907 ] Kinga Marton commented on YARN-9743: [~aajisaka] I have checked and tested your PR and it LGTM, +1 (non-binding) > [JDK11] TestTimelineWebServices.testContextFactory fails > > > Key: YARN-9743 > URL: https://issues.apache.org/jira/browse/YARN-9743 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineservice >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Akira Ajisaka >Priority: Major > Attachments: YARN-9743.001.patch, YARN-9743.002.patch > > > Tested on OpenJDK 11.0.2 on a Mac. > Stack trace: > {noformat} > [ERROR] Tests run: 29, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: > 36.016 s <<< FAILURE! - in > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices > [ERROR] > testContextFactory(org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices) > Time elapsed: 1.031 s <<< ERROR! > java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory > at > java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583) > at > java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) > at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) > at java.base/java.lang.Class.forName0(Native Method) > at java.base/java.lang.Class.forName(Class.java:315) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.newContext(ContextFactory.java:85) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.createContext(ContextFactory.java:112) > at > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices.testContextFactory(TestTimelineWebServices.java:1039) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-10043) FairOrderingPolicy Improvements
[ https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025899#comment-17025899 ] Peter Bacsko edited comment on YARN-10043 at 1/29/20 2:06 PM: -- Some comments from me: 1. Nit: Pay attention to missing white spaces, like {{if(res) == 0}} (after "if") 2. {{compareDemand()}} can be simplified: {noformat} return (int) Math.signum(demand2 - demand1); {noformat} 3. {{testOrderingUsingAppSubmitTime()}} has multiple asserts. I'd prefer having separate test cases for better readability. Examples: * testOrderingWithoutUsedAndPendingResources * testOrderingWithUsedAndPendingResources * testOrderingWithSubmissionTime 4. Same applies to {{testOrderingUsingAppDemand()}}. Could be split up like: * testOrderingWithZeroDemand * testOrderingWithSameStartTimeDifferentDemand * Also, "//No changes, equal" part is the same as in {{testOrderingUsingAppSubmitTime()}} was (Author: pbacsko): Some comments from me: 1. Nit: Pay attention to missing white spaces, like {{if(res) == 0}} (after "if") 2. {{compareDemand()}} can be simplified: {noformat} return (int) Math.signum(demand2 - demand1); {noformat} 3. {{testOrderingUsingAppSubmitTime()}} has multiple asserts. I'd prefer having separate test cases for better readability. Examples: * testOrderingWithoutUsedAndPendingResources * testOrderingWithUsedAndPendingResource * testOrderingWithSubmissionTime 4. Same applies to {{testOrderingUsingAppDemand()}}. 
Could be split up like: * testOrderingWithZeroDemand * testOrderingWithSameStartTimeDifferentDemand * Also, "//No changes, equal" part is the same as in {{testOrderingUsingAppSubmitTime()}} > FairOrderingPolicy Improvements > --- > > Key: YARN-10043 > URL: https://issues.apache.org/jira/browse/YARN-10043 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-10043.001.patch, YARN-10043.002.patch > > > FairOrderingPolicy can be improved by using some of the approaches (only > relevant) implemented in FairSharePolicy of FS. This improvement has > significance in FS to CS migration context. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10043) FairOrderingPolicy Improvements
[ https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025899#comment-17025899 ] Peter Bacsko commented on YARN-10043: - Some comments from me: 1. Nit: Pay attention to missing white spaces, like {{if(res) == 0}} (after "if") 2. {{compareDemand()}} can be simplified: {noformat} return (int) Math.signum(demand2 - demand1); {noformat} 3. {{testOrderingUsingAppSubmitTime()}} has multiple asserts. I'd prefer having separate test cases for better readability. Examples: * testOrderingWithoutUsedAndPendingResources * testOrderingWithUsedAndPendingResource * testOrderingWithSubmissionTime 4. Same applies to {{testOrderingUsingAppDemand()}}. Could be split up like: * testOrderingWithZeroDemand * testOrderingWithSameStartTimeDifferentDemand * Also, "//No changes, equal" part is the same as in {{testOrderingUsingAppSubmitTime()}} > FairOrderingPolicy Improvements > --- > > Key: YARN-10043 > URL: https://issues.apache.org/jira/browse/YARN-10043 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Manikandan R >Assignee: Manikandan R >Priority: Major > Attachments: YARN-10043.001.patch, YARN-10043.002.patch > > > FairOrderingPolicy can be improved by using some of the approaches (only > relevant) implemented in FairSharePolicy of FS. This improvement has > significance in FS to CS migration context. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
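The simplification suggested in point 2 of the review above, spelled out as a compilable sketch. It assumes demand is a float magnitude (as in FairSharePolicy-style code); larger demand sorts first, i.e. the comparator orders descending by demand.

```java
// One-line compareDemand() via Math.signum, as Peter suggests: the sign of
// (demand2 - demand1) puts the app with the larger demand first.
public class DemandComparison {
    public static int compareDemand(float demand1, float demand2) {
        return (int) Math.signum(demand2 - demand1);
    }

    public static void main(String[] args) {
        System.out.println(compareDemand(1f, 2f)); // 1: the second app sorts first
        System.out.println(compareDemand(2f, 1f)); // -1: the first app sorts first
        System.out.println(compareDemand(2f, 2f)); // 0: fall through to the next tiebreaker
    }
}
```

Note that the subtract-and-cast idiom is safe for bounded float demands, but would overflow for raw long arithmetic; with floats, signum keeps the result in {-1, 0, 1}.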
[jira] [Created] (YARN-10111) In Federation cluster Distributed Shell Application submission fails as YarnClient#getQueueInfo is not implemented
Sushanta Sen created YARN-10111: --- Summary: In Federation cluster Distributed Shell Application submission fails as YarnClient#getQueueInfo is not implemented Key: YARN-10111 URL: https://issues.apache.org/jira/browse/YARN-10111 Project: Hadoop YARN Issue Type: Bug Reporter: Sushanta Sen In Federation cluster Distributed Shell Application submission fails as YarnClient#getQueueInfo is not implemented. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix misc issues
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10099: Attachment: YARN-10099-006.patch > FS-CS converter: handle allow-undeclared-pools and user-as-default-queue > properly and fix misc issues > - > > Key: YARN-10099 > URL: https://issues.apache.org/jira/browse/YARN-10099 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Attachments: YARN-10099-001.patch, YARN-10099-002.patch, > YARN-10099-003.patch, YARN-10099-004.patch, YARN-10099-005.patch, > YARN-10099-006.patch > > > This ticket is intended to fix three issues: > 1. Based on the latest documentation, there are two important properties that > are ignored if we have placement rules: > ||Property||Explanation|| > |yarn.scheduler.fair.allow-undeclared-pools|If this is true, new queues can > be created at application submission time, whether because they are specified > as the application’s queue by the submitter or because they are placed there > by the user-as-default-queue property. If this is false, any time an app > would be placed in a queue that is not specified in the allocations file, it > is placed in the “default” queue instead. Defaults to true. *If a queue > placement policy is given in the allocations file, this property is ignored.*| > |yarn.scheduler.fair.user-as-default-queue|Whether to use the username > associated with the allocation as the default queue name, in the event that a > queue name is not specified. If this is set to “false” or unset, all jobs > have a shared default queue, named “default”. Defaults to true. *If a queue > placement policy is given in the allocations file, this property is ignored.*| > Right now these settings affect the conversion regardless of the placement > rules. > 2. 
A converted configuration throws this error: > {noformat} > 2020-01-27 03:35:35,007 INFO > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned > to standby state > 2020-01-27 03:35:35,008 FATAL > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting > ResourceManager > java.lang.IllegalArgumentException: Illegal queue mapping > u:%user:%user;u:%user:root.users.%user;u:%user:root.default > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getQueueMappings(CapacitySchedulerConfiguration.java:1113) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.initialize(UserGroupMappingPlacementRule.java:244) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.getUserGroupMappingPlacementRule(CapacityScheduler.java:671) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updatePlacementRules(CapacityScheduler.java:712) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:753) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:361) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:426) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) > at > org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) > {noformat} > Mapping rules should be separated by a "," character, not by a semicolon. > 3. When initializing FS for conversion, we add the current {{yarn-site.xml}} > as resource. This is not necessary. This can cause problems like: > {noformat} > [...] 
> 20/01/29 02:45:38 ERROR config.RangerConfiguration: > addResourceIfReadable(ranger-yarn-audit.xml): couldn't find resource file > location > 20/01/29 02:45:38 ERROR config.RangerConfiguration: > addResourceIfReadable(ranger-yarn-security.xml): couldn't find resource file > location > 20/01/29 02:45:38 ERROR config.RangerConfiguration: > addResourceIfReadable(ranger-yarn-policymgr-ssl.xml): couldn't find resource > file location > 20/01/29 02:45:38 ERROR conf.Configuration: error parsing conf > file:/etc/hadoop/conf.cloudera.YARN-1/xasecure-audit.xml > java.io.FileNotFoundException: > /etc/hadoop/conf.cloudera.YARN-1/xasecure-audit.xml (No such file or > directory) > at java.base/java.io.FileInputStream.open0(Native Method) > at java.base/java.io.FileInputStream.open(FileInputStream.java:219) > at java.base/java.io.FileInputStream.(FileInputStream.java:157) > at
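For the separator problem in issue 2 above, the fix is mechanical: the converter must join the same rules with "," instead of ";". A hedged sketch of what the corrected property would look like in capacity-scheduler.xml, reusing the exact mapping string from the IllegalArgumentException (the property name is the standard Capacity Scheduler key; the converted queue paths are taken verbatim from the error):

```xml
<!-- Sketch only: the mapping list from the error above, rewritten with the
     "," separator that CapacitySchedulerConfiguration.getQueueMappings expects. -->
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:%user:%user,u:%user:root.users.%user,u:%user:root.default</value>
</property>
```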
[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix misc issues
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10099: Description: This ticket is intended to fix three issues: 1. Based on the latest documentation, there are two important properties that are ignored if we have placement rules: ||Property||Explanation|| |yarn.scheduler.fair.allow-undeclared-pools|If this is true, new queues can be created at application submission time, whether because they are specified as the application’s queue by the submitter or because they are placed there by the user-as-default-queue property. If this is false, any time an app would be placed in a queue that is not specified in the allocations file, it is placed in the “default” queue instead. Defaults to true. *If a queue placement policy is given in the allocations file, this property is ignored.*| |yarn.scheduler.fair.user-as-default-queue|Whether to use the username associated with the allocation as the default queue name, in the event that a queue name is not specified. If this is set to “false” or unset, all jobs have a shared default queue, named “default”. Defaults to true. *If a queue placement policy is given in the allocations file, this property is ignored.*| Right now these settings affect the conversion regardless of the placement rules. 2. 
A converted configuration throws this error: {noformat} 2020-01-27 03:35:35,007 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state 2020-01-27 03:35:35,008 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager java.lang.IllegalArgumentException: Illegal queue mapping u:%user:%user;u:%user:root.users.%user;u:%user:root.default at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getQueueMappings(CapacitySchedulerConfiguration.java:1113) at org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.initialize(UserGroupMappingPlacementRule.java:244) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.getUserGroupMappingPlacementRule(CapacityScheduler.java:671) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updatePlacementRules(CapacityScheduler.java:712) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:753) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:361) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:426) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) {noformat} Mapping rules should be separated by a "," character, not by a semicolon. 3. When initializing FS for conversion, we add the current {{yarn-site.xml}} as resource. This is not necessary. This can cause problems like: {noformat} [...] 
20/01/29 02:45:38 ERROR config.RangerConfiguration: addResourceIfReadable(ranger-yarn-audit.xml): couldn't find resource file location 20/01/29 02:45:38 ERROR config.RangerConfiguration: addResourceIfReadable(ranger-yarn-security.xml): couldn't find resource file location 20/01/29 02:45:38 ERROR config.RangerConfiguration: addResourceIfReadable(ranger-yarn-policymgr-ssl.xml): couldn't find resource file location 20/01/29 02:45:38 ERROR conf.Configuration: error parsing conf file:/etc/hadoop/conf.cloudera.YARN-1/xasecure-audit.xml java.io.FileNotFoundException: /etc/hadoop/conf.cloudera.YARN-1/xasecure-audit.xml (No such file or directory) at java.base/java.io.FileInputStream.open0(Native Method) at java.base/java.io.FileInputStream.open(FileInputStream.java:219) at java.base/java.io.FileInputStream.(FileInputStream.java:157) at java.base/java.io.FileInputStream.(FileInputStream.java:112) at java.base/sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:86) at java.base/sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:184) at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2966) at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3057) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3018) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2996) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2871) at org.apache.hadoop.conf.Configuration.get(Configuration.java:1223) at
[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix misc issues
[ https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10099: Summary: FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix misc issues (was: FS-CS converter: handle allow-undeclared-pools and user-as-default-queue properly and fix mapping rule separator)
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9624) Use switch case for ProtoUtils#convertFromProtoFormat containerState
[ https://issues.apache.org/jira/browse/YARN-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025852#comment-17025852 ] Hadoop QA commented on YARN-9624: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 38s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-9624 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992114/YARN-9624.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 017eb08a54e8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 825db8f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25463/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25463/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Use switch case for ProtoUtils#convertFromProtoFormat containerState > > >
[jira] [Updated] (YARN-9624) Use switch case for ProtoUtils#convertFromProtoFormat containerState
[ https://issues.apache.org/jira/browse/YARN-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated YARN-9624: Attachment: YARN-9624.003.patch > Use switch case for ProtoUtils#convertFromProtoFormat containerState > > > Key: YARN-9624 > URL: https://issues.apache.org/jira/browse/YARN-9624 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bibin Chundatt >Assignee: Bilwa S T >Priority: Major > Labels: performance > Attachments: YARN-9624.001.patch, YARN-9624.002.patch, > YARN-9624.003.patch > > > On large cluster with 100K+ containers on every heartbeat > {{ContainerState.valueOf(e.name().replace(CONTAINER_STATE_PREFIX, ""))}} will > be too costly. Update with switch case.
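The optimization described in the ticket can be sketched as follows: a direct switch over the proto enum avoids the per-call string allocation of `e.name().replace(...)` plus the `valueOf` lookup. The enums below are simplified stand-ins for the real generated proto types, not the actual patch:

```java
// Simplified stand-ins for YarnProtos.ContainerStateProto / api ContainerState.
enum ContainerStateProto { C_NEW, C_RUNNING, C_COMPLETE }

enum ContainerState { NEW, RUNNING, COMPLETE }

final class ProtoUtilsSketch {
    // Switch-based conversion: no temporary strings, resolved per enum constant.
    static ContainerState convertFromProtoFormat(ContainerStateProto e) {
        switch (e) {
            case C_NEW:      return ContainerState.NEW;
            case C_RUNNING:  return ContainerState.RUNNING;
            case C_COMPLETE: return ContainerState.COMPLETE;
            default: throw new IllegalArgumentException("Unknown state: " + e);
        }
    }
}
```

On a heartbeat path that converts 100K+ container states, removing the string round-trip is exactly the kind of micro-optimization the ticket's "performance" label suggests.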
[jira] [Commented] (YARN-10095) Fix help message for yarn rmadmin
[ https://issues.apache.org/jira/browse/YARN-10095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025813#comment-17025813 ] Hadoop QA commented on YARN-10095: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 3m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 0 new + 84 unchanged - 4 fixed = 84 total (was 88) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 19s{color} | {color:green} hadoop-yarn-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-10095 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12992099/YARN-10095.000.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6ce0fe1c38c5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 825db8f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_232 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25462/testReport/ | | Max. process+thread count | 556 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25462/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Fix help message for yarn
[jira] [Commented] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025808#comment-17025808 ] Hadoop QA commented on YARN-10110: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 46s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 53s{color} | {color:green} hadoop-yarn-server-router in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router | | | Possible doublecheck on org.apache.hadoop.yarn.server.router.security.authorize.RouterPolicyProvider.routerPolicyProvider in org.apache.hadoop.yarn.server.router.security.authorize.RouterPolicyProvider.getInstance() At RouterPolicyProvider.java:org.apache.hadoop.yarn.server.router.security.authorize.RouterPolicyProvider.getInstance() At RouterPolicyProvider.java:[lines 41-43] | | | org.apache.hadoop.yarn.server.router.security.authorize.RouterPolicyProvider.getServices() may expose internal representation by returning RouterPolicyProvider.routerServices At RouterPolicyProvider.java:by returning RouterPolicyProvider.routerServices At RouterPolicyProvider.java:[line 61] | \\ \\ || Subsystem || Report/Notes || |
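The first FindBugs warning above ("Possible doublecheck on ... routerPolicyProvider") flags the classic double-checked-locking pattern, which is only safe in Java when the field is volatile. A minimal sketch of the thread-safe form; `LazySingleton` is a simplified stand-in for `RouterPolicyProvider`, not the actual patch:

```java
final class LazySingleton {
    // volatile is what makes the unsynchronized first check safe to publish.
    private static volatile LazySingleton instance;

    private LazySingleton() { }

    static LazySingleton getInstance() {
        if (instance == null) {                     // fast path, no lock taken
            synchronized (LazySingleton.class) {
                if (instance == null) {             // re-check under the lock
                    instance = new LazySingleton();
                }
            }
        }
        return instance;
    }
}
```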
[jira] [Commented] (YARN-10109) Allow stop and convert from leaf to parent queue in a single Mutation API call
[ https://issues.apache.org/jira/browse/YARN-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025792#comment-17025792 ] Prabhu Joseph commented on YARN-10109: -- Thanks [~kmarton] for reviewing. It will log the error message in all other scenarios as well. Is it fine to add a debug log? > Allow stop and convert from leaf to parent queue in a single Mutation API call > -- > > Key: YARN-10109 > URL: https://issues.apache.org/jira/browse/YARN-10109 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-10109-001.patch, YARN-10109-002.patch > > > SchedulerConf Mutation API does not Allow Stop and Adding queue under an > existing Leaf Queue in a single call. > *Repro:* > > {code:java} > Capacity-Scheduler.xml: > yarn.scheduler.capacity.root.queues = default > yarn.scheduler.capacity.root.default.capacity = 100 > cat abc.xml > > > root.default.v1 > > > capacity > 100 > > > > > root.default > > > state > STOPPED > > > > > [yarn@pjoseph-1 tmp]$ curl --negotiate -u : -X PUT -d @add.xml -H > "Content-type: application/xml" > 'http://:8088/ws/v1/cluster/scheduler-conf?user.name=yarn' > Failed to re-init queues : Can not convert the leaf queue: root.default to > parent queue since it is not yet in stopped state. Current State : RUNNING > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
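Note that the mail archive has stripped the XML tags from the repro above, leaving only the element text (queue names, keys, and values). As a hedged aid to readers, a payload combining those visible values in the documented sched-conf mutation format generally looks something like the following; the exact payload in the original report may have differed:

```xml
<!-- Hedged reconstruction, element names per the sched-conf REST format -->
<sched-conf>
  <add-queue>
    <queue-name>root.default.v1</queue-name>
    <params>
      <entry>
        <key>capacity</key>
        <value>100</value>
      </entry>
    </params>
  </add-queue>
  <update-queue>
    <queue-name>root.default</queue-name>
    <params>
      <entry>
        <key>state</key>
        <value>STOPPED</value>
      </entry>
    </params>
  </update-queue>
</sched-conf>
```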
[jira] [Commented] (YARN-9743) [JDK11] TestTimelineWebServices.testContextFactory fails
[ https://issues.apache.org/jira/browse/YARN-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025764#comment-17025764 ] Kinga Marton commented on YARN-9743: Sorry, ignore my previous comment. I can see the precommit results in your PR. > [JDK11] TestTimelineWebServices.testContextFactory fails > > > Key: YARN-9743 > URL: https://issues.apache.org/jira/browse/YARN-9743 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineservice >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Akira Ajisaka >Priority: Major > Attachments: YARN-9743.001.patch, YARN-9743.002.patch > > > Tested on OpenJDK 11.0.2 on a Mac. > Stack trace: > {noformat} > [ERROR] Tests run: 29, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: > 36.016 s <<< FAILURE! - in > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices > [ERROR] > testContextFactory(org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices) > Time elapsed: 1.031 s <<< ERROR! > java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory > at > java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583) > at > java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) > at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) > at java.base/java.lang.Class.forName0(Native Method) > at java.base/java.lang.Class.forName(Class.java:315) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.newContext(ContextFactory.java:85) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.createContext(ContextFactory.java:112) > at > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices.testContextFactory(TestTimelineWebServices.java:1039) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
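The `ClassNotFoundException` above arises because JDK 11 removed the JAXB implementation that was bundled with the JDK (JEP 320), so `com.sun.xml.internal.bind.v2.ContextFactory` is simply absent at runtime. One general way to cope is to try candidate factory class names in order and fall back to an external jaxb-impl jar if present; the sketch below illustrates that lookup pattern only, and is not the committed fix:

```java
// Sketch of a class-name fallback lookup; the candidate list and class
// name are assumptions about a typical JAXB setup, not the actual patch.
class ContextFactoryResolver {
  private static final String[] CANDIDATES = {
      "com.sun.xml.internal.bind.v2.ContextFactory", // JDK 8 internal copy
      "com.sun.xml.bind.v2.ContextFactory"           // external jaxb-impl jar
  };

  // Try each candidate in order; rethrow the last failure if none resolve.
  static Class<?> resolve() throws ClassNotFoundException {
    ClassNotFoundException last = null;
    for (String name : CANDIDATES) {
      try {
        return Class.forName(name);
      } catch (ClassNotFoundException e) {
        last = e;
      }
    }
    throw last;
  }
}
```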
[jira] [Commented] (YARN-10109) Allow stop and convert from leaf to parent queue in a single Mutation API call
[ https://issues.apache.org/jira/browse/YARN-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025760#comment-17025760 ] Kinga Marton commented on YARN-10109: - Thank you for reporting and fixing this, [~prabhujoseph]. I have a small comment on the following code: {code:java} try { newQueueState = QueueState.valueOf( newConf.get(configPrefix + "state")); } catch (Exception ex) { // ignore the exception as the config state is optional } {code} It is never a good idea to just suppress/ignore an exception. I think we should at least log a message. > Allow stop and convert from leaf to parent queue in a single Mutation API call > -- > > Key: YARN-10109 > URL: https://issues.apache.org/jira/browse/YARN-10109 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-10109-001.patch, YARN-10109-002.patch > > > SchedulerConf Mutation API does not Allow Stop and Adding queue under an > existing Leaf Queue in a single call. > *Repro:* > > {code:java} > Capacity-Scheduler.xml: > yarn.scheduler.capacity.root.queues = default > yarn.scheduler.capacity.root.default.capacity = 100 > cat abc.xml > > > root.default.v1 > > > capacity > 100 > > > > > root.default > > > state > STOPPED > > > > > [yarn@pjoseph-1 tmp]$ curl --negotiate -u : -X PUT -d @add.xml -H > "Content-type: application/xml" > 'http://:8088/ws/v1/cluster/scheduler-conf?user.name=yarn' > Failed to re-init queues : Can not convert the leaf queue: root.default to > parent queue since it is not yet in stopped state. Current State : RUNNING > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
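A minimal sketch of the review suggestion above: catch only the exception `valueOf` actually throws and log it at debug level, rather than silently swallowing everything. The enum and helper names below are illustrative stand-ins, not the Capacity Scheduler types:

```java
import java.util.Locale;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative stand-in for the scheduler's queue state enum.
enum QueueState { RUNNING, STOPPED }

class QueueStateParser {
  private static final Logger LOG =
      Logger.getLogger(QueueStateParser.class.getName());

  // Returns the parsed state, or null when the optional config is absent
  // or invalid, logging at debug (FINE) level instead of ignoring the error.
  static QueueState parseState(String raw) {
    if (raw == null) {
      return null; // config key not present: genuinely optional
    }
    try {
      return QueueState.valueOf(raw.trim().toUpperCase(Locale.ROOT));
    } catch (IllegalArgumentException ex) {
      LOG.log(Level.FINE, "Ignoring invalid queue state value: " + raw, ex);
      return null;
    }
  }
}
```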
[jira] [Commented] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025755#comment-17025755 ] Bilwa S T commented on YARN-10110: -- [~bibinchundatt] Could you please help to review > In Federation Secure cluster Application submission fails when authorization > is enabled > --- > > Key: YARN-10110 > URL: https://issues.apache.org/jira/browse/YARN-10110 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Blocker > Attachments: YARN-10110.001.patch > > > 【Precondition】: > 1. Secure Federated cluster is available > 2. Add the below configuration in Router and client core-site.xml > hadoop.security.authorization= true > 3. Restart the router service > 【Test step】: > 1. Go to router client bin path and submit a MR PI job > 2. Observe the client console screen > 【Expect Output】: > No error should be thrown and Job should be successful > 【Actual Output】: > Job failed prompting "Protocol interface > org.apache.hadoop.yarn.api.ApplicationClientProtocolPB is not known.," > 【Additional Note】: > But on setting the parameter as false, job is submitted and success. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10110) In Federation Secure cluster Application submission fails when authorization is enabled
[ https://issues.apache.org/jira/browse/YARN-10110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilwa S T updated YARN-10110: - Attachment: YARN-10110.001.patch > In Federation Secure cluster Application submission fails when authorization > is enabled > --- > > Key: YARN-10110 > URL: https://issues.apache.org/jira/browse/YARN-10110 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Blocker > Attachments: YARN-10110.001.patch > > > 【Precondition】: > 1. Secure Federated cluster is available > 2. Add the below configuration in Router and client core-site.xml > hadoop.security.authorization= true > 3. Restart the router service > 【Test step】: > 1. Go to router client bin path and submit a MR PI job > 2. Observe the client console screen > 【Expect Output】: > No error should be thrown and Job should be successful > 【Actual Output】: > Job failed prompting "Protocol interface > org.apache.hadoop.yarn.api.ApplicationClientProtocolPB is not known.," > 【Additional Note】: > But on setting the parameter as false, job is submitted and success. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025728#comment-17025728 ] Szilard Nemeth commented on YARN-10107: --- Thanks [~prabhujoseph]. > Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery > binary even if auto discovery is turned off > --- > > Key: YARN-10107 > URL: https://issues.apache.org/jira/browse/YARN-10107 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-10107.001.patch, nm-config-afterchange-gpu.xml, > nm-config-beforechange-gpu.xml.xml, > request-response-afterchange-with-autodiscovery.txt, > request-response-afterchange.txt, request-response-beforechange.txt > > > During internal end-to-end testing, I found the following issue: > Configuration: > - GPU is enabled > - yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables is set > to "/usr/bin/ls" - Any existing valid binary file > - yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices is set to > "0:0,1:1,2:2", so auto-discovery is turned off. > If REST endpoint > [http://quasar-tsjqpq-3.vpc.cloudera.com:8042/ws/v1/node/resources/yarn.io%2Fgpu] > is called, the following exception is thrown in NM: > {code:java} > 2020-01-23 07:55:24,803 ERROR > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin: > Failed to find GPU discovery executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. > org.apache.hadoop.yarn.exceptions.YarnException: Failed to find GPU discovery > executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. 
> at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper.getGpuDeviceInformation(NvidiaBinaryHelper.java:54) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer.getGpuDeviceInformation(GpuDiscoverer.java:125) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin.getNMResourceInfo(GpuResourcePlugin.java:104) > at > org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getNMResourceInfo(NMWebServices.java:515) > {code} > *Let's break this down:* > 1. > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin#getNMResourceInfo > just calls to the > {code:java} > gpuDeviceInformation = gpuDiscoverer.getGpuDeviceInformation(); > {code} > 2. In > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#getGpuDeviceInformation, > the following calls to the NvidiaBinaryHelper.getGpuDeviceInformation: > {code:java} > try { > lastDiscoveredGpuInformation = > nvidiaBinaryHelper.getGpuDeviceInformation(pathOfGpuBinary); > } catch (IOException e) { > {code} > 3. > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper#getGpuDeviceInformation > finally throws the exception. > This is only happens in case of the parameter called "pathOfGpuBinary" is > null. > Since this method is only called from GpuDiscoverer#getGpuDeviceInformation, > that passes it's field called "pathOfGpuBinary" as the only one parameter, we > can be sure if this field is null, then we have the exception. > 4. 
The only method that can set the "pathOfGpuBinary" fields is with this > call chain: > {code:java} > GpuDiscoverer.lookUpAutoDiscoveryBinary(Configuration) > (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) > GpuDiscoverer.initialize(Configuration, NvidiaBinaryHelper) > (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) > {code} > 5. GpuDiscoverer#initialize contains this code: > {code:java} > if (isAutoDiscoveryEnabled()) { > numOfErrorExecutionSinceLastSucceed = 0; > lookUpAutoDiscoveryBinary(config); > > {code} > , so > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#pathOfGpuBinary > is set ONLY IF auto discovery is enabled. > Since our tests don't have auto discovery enabled, we have this exception. > In this sense, the exception message is very misleading for me: > {code:java} > Failed to find GPU discovery executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. > {code} > > Related jira: https://issues.apache.org/jira/browse/YARN-9337 > I think this exception message is very misleading and of
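Since `pathOfGpuBinary` is only ever set when auto-discovery is enabled (step 5 above), the natural fix is to guard the web-service path on that flag instead of unconditionally invoking the discovery binary. A simplified, hypothetical sketch of that control flow, not the actual GpuDiscoverer/GpuResourcePlugin code:

```java
// Hypothetical control-flow sketch: skip the discovery binary entirely
// when devices were configured manually (auto-discovery off).
class GpuInfoSketch {
  private final boolean autoDiscoveryEnabled;
  private final String pathOfGpuBinary; // null when auto-discovery is off

  GpuInfoSketch(String binaryPath) {
    this.pathOfGpuBinary = binaryPath;
    this.autoDiscoveryEnabled = binaryPath != null;
  }

  String getNMResourceInfo() {
    if (!autoDiscoveryEnabled) {
      // Manually configured devices: report them without ever touching
      // the binary, so no misleading exception is thrown.
      return "manual-gpu-config";
    }
    return runDiscoveryBinary();
  }

  private String runDiscoveryBinary() {
    if (pathOfGpuBinary == null) {
      // Mirrors the unreachable-by-construction failure mode from the bug.
      throw new IllegalStateException("Failed to find GPU discovery executable");
    }
    return "discovered-via-" + pathOfGpuBinary;
  }
}
```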
[jira] [Commented] (YARN-9743) [JDK11] TestTimelineWebServices.testContextFactory fails
[ https://issues.apache.org/jira/browse/YARN-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025702#comment-17025702 ] Kinga Marton commented on YARN-9743: Thank you [~aajisaka] for taking this over. Can you please upload your patch here? We have not yet moved to pull requests, and the precommit check can only pick up the patch files attached to the Jira. > [JDK11] TestTimelineWebServices.testContextFactory fails > > > Key: YARN-9743 > URL: https://issues.apache.org/jira/browse/YARN-9743 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineservice >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Akira Ajisaka >Priority: Major > Attachments: YARN-9743.001.patch, YARN-9743.002.patch > > > Tested on OpenJDK 11.0.2 on a Mac. > Stack trace: > {noformat} > [ERROR] Tests run: 29, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: > 36.016 s <<< FAILURE! - in > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices > [ERROR] > testContextFactory(org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices) > Time elapsed: 1.031 s <<< ERROR! 
> java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory > at > java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583) > at > java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) > at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) > at java.base/java.lang.Class.forName0(Native Method) > at java.base/java.lang.Class.forName(Class.java:315) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.newContext(ContextFactory.java:85) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.ContextFactory.createContext(ContextFactory.java:112) > at > org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices.testContextFactory(TestTimelineWebServices.java:1039) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025684#comment-17025684 ] Hudson commented on YARN-10107: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17915 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17915/]) YARN-10107. Fix GpuResourcePlugin#getNMResourceInfo to honor Auto (pjoseph: rev 825db8fe2ab37bd5a9a54485ea9ecbabf3766ed6) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuResourcePlugin.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/TestGpuResourcePlugin.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuDiscoverer.java > Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery > binary even if auto discovery is turned off > --- > > Key: YARN-10107 > URL: https://issues.apache.org/jira/browse/YARN-10107 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-10107.001.patch, nm-config-afterchange-gpu.xml, > nm-config-beforechange-gpu.xml.xml, > request-response-afterchange-with-autodiscovery.txt, > request-response-afterchange.txt, request-response-beforechange.txt > > > During internal end-to-end testing, I found the following issue: > Configuration: > - GPU is enabled > - yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables is set > to "/usr/bin/ls" - Any existing valid binary file > - yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices is set to > "0:0,1:1,2:2", so auto-discovery is turned off. 
> If REST endpoint > [http://quasar-tsjqpq-3.vpc.cloudera.com:8042/ws/v1/node/resources/yarn.io%2Fgpu] > is called, the following exception is thrown in NM: > {code:java} > 2020-01-23 07:55:24,803 ERROR > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin: > Failed to find GPU discovery executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. > org.apache.hadoop.yarn.exceptions.YarnException: Failed to find GPU discovery > executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper.getGpuDeviceInformation(NvidiaBinaryHelper.java:54) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer.getGpuDeviceInformation(GpuDiscoverer.java:125) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin.getNMResourceInfo(GpuResourcePlugin.java:104) > at > org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getNMResourceInfo(NMWebServices.java:515) > {code} > *Let's break this down:* > 1. > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin#getNMResourceInfo > just calls to the > {code:java} > gpuDeviceInformation = gpuDiscoverer.getGpuDeviceInformation(); > {code} > 2. In > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#getGpuDeviceInformation, > the following calls to the NvidiaBinaryHelper.getGpuDeviceInformation: > {code:java} > try { > lastDiscoveredGpuInformation = > nvidiaBinaryHelper.getGpuDeviceInformation(pathOfGpuBinary); > } catch (IOException e) { > {code} > 3. > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper#getGpuDeviceInformation > finally throws the exception. 
> This is only happens in case of the parameter called "pathOfGpuBinary" is > null. > Since this method is only called from GpuDiscoverer#getGpuDeviceInformation, > that passes it's field called "pathOfGpuBinary" as the only one parameter, we > can be sure if this field is null, then we have the exception. > 4. The only method that can set the "pathOfGpuBinary" fields is with this > call chain: > {code:java} > GpuDiscoverer.lookUpAutoDiscoveryBinary(Configuration) > (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) > GpuDiscoverer.initialize(Configuration, NvidiaBinaryHelper) >
[jira] [Commented] (YARN-10107) Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery binary even if auto discovery is turned off
[ https://issues.apache.org/jira/browse/YARN-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025677#comment-17025677 ] Prabhu Joseph commented on YARN-10107: -- Thank you [~snemeth] for the patch, [~pbacsko] for the review. +1 for [^YARN-10107.001.patch] . Have just committed this to trunk. > Invoking NMWebServices#getNMResourceInfo tries to execute gpu discovery > binary even if auto discovery is turned off > --- > > Key: YARN-10107 > URL: https://issues.apache.org/jira/browse/YARN-10107 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: YARN-10107.001.patch, nm-config-afterchange-gpu.xml, > nm-config-beforechange-gpu.xml.xml, > request-response-afterchange-with-autodiscovery.txt, > request-response-afterchange.txt, request-response-beforechange.txt > > > During internal end-to-end testing, I found the following issue: > Configuration: > - GPU is enabled > - yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables is set > to "/usr/bin/ls" - Any existing valid binary file > - yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices is set to > "0:0,1:1,2:2", so auto-discovery is turned off. > If REST endpoint > [http://quasar-tsjqpq-3.vpc.cloudera.com:8042/ws/v1/node/resources/yarn.io%2Fgpu] > is called, the following exception is thrown in NM: > {code:java} > 2020-01-23 07:55:24,803 ERROR > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin: > Failed to find GPU discovery executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. > org.apache.hadoop.yarn.exceptions.YarnException: Failed to find GPU discovery > executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. 
> at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper.getGpuDeviceInformation(NvidiaBinaryHelper.java:54) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer.getGpuDeviceInformation(GpuDiscoverer.java:125) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin.getNMResourceInfo(GpuResourcePlugin.java:104) > at > org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices.getNMResourceInfo(NMWebServices.java:515) > {code} > *Let's break this down:* > 1. > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin#getNMResourceInfo > just calls to the > {code:java} > gpuDeviceInformation = gpuDiscoverer.getGpuDeviceInformation(); > {code} > 2. In > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#getGpuDeviceInformation, > the following calls to the NvidiaBinaryHelper.getGpuDeviceInformation: > {code:java} > try { > lastDiscoveredGpuInformation = > nvidiaBinaryHelper.getGpuDeviceInformation(pathOfGpuBinary); > } catch (IOException e) { > {code} > 3. > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.NvidiaBinaryHelper#getGpuDeviceInformation > finally throws the exception. > This is only happens in case of the parameter called "pathOfGpuBinary" is > null. > Since this method is only called from GpuDiscoverer#getGpuDeviceInformation, > that passes it's field called "pathOfGpuBinary" as the only one parameter, we > can be sure if this field is null, then we have the exception. > 4. 
The only method that can set the "pathOfGpuBinary" fields is with this > call chain: > {code:java} > GpuDiscoverer.lookUpAutoDiscoveryBinary(Configuration) > (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) > GpuDiscoverer.initialize(Configuration, NvidiaBinaryHelper) > (org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu) > {code} > 5. GpuDiscoverer#initialize contains this code: > {code:java} > if (isAutoDiscoveryEnabled()) { > numOfErrorExecutionSinceLastSucceed = 0; > lookUpAutoDiscoveryBinary(config); > > {code} > , so > org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer#pathOfGpuBinary > is set ONLY IF auto discovery is enabled. > Since our tests don't have auto discovery enabled, we have this exception. > In this sense, the exception message is very misleading for me: > {code:java} > Failed to find GPU discovery executable, please double check > yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables setting. > {code} > > Related jira: