[jira] [Commented] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17073214#comment-17073214 ] Hadoop QA commented on YARN-2710: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 58s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} branch-3.2 Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 8m 29s{color} | {color:red} root in branch-3.2 failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} branch-3.2 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 5m 33s{color} | {color:red} branch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} branch-3.2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 6m 21s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 8s{color} | {color:red} hadoop-yarn-client in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.TestResourceTrackerOnHA | | | hadoop.yarn.client.TestApplicationClientProtocolOnHA | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:0f25cbbb251 | | JIRA Issue | YARN-2710 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998518/YARN-2710-branch-3.2.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4b8b21ecf3e5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.2 / 6d5f87b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/25796/artifact/out/branch-mvninstall-root.txt | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/25796/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | unit |
[jira] [Commented] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17073170#comment-17073170 ] Ahmed Hussein commented on YARN-2710: - [~ebadger] I know you did a tremendous job and I am very appreciative of what you have done, are still doing, and will do. I created a new Jira, YARN-10220, and uploaded a patch for branch-3.2. > RM HA tests failed intermittently on trunk > -- > > Key: YARN-2710 > URL: https://issues.apache.org/jira/browse/YARN-2710 > Project: Hadoop YARN > Issue Type: Bug > Components: client > Environment: Java 8, jenkins >Reporter: Wangda Tan >Assignee: Ahmed Hussein >Priority: Major > Fix For: 3.3.0 > > Attachments: TestResourceTrackerOnHA-output.2.txt, > YARN-2710-branch-2.10.001.patch, YARN-2710-branch-2.10.002.patch, > YARN-2710-branch-2.10.003.patch, YARN-2710-branch-3.2.003.patch, > YARN-2710.001.patch, YARN-2710.002.patch, YARN-2710.003.patch, > org.apache.hadoop.yarn.client.TestResourceTrackerOnHA-output.txt > > > Failure like, it can be happened in TestApplicationClientProtocolOnHA, > TestResourceTrackerOnHA, etc. > {code} > org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA > testGetApplicationAttemptsOnHA(org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA) > Time elapsed: 9.491 sec <<< ERROR! > java.net.ConnectException: Call From asf905.gq1.ygridcore.net/67.195.81.149 > to asf905.gq1.ygridcore.net:28032 failed on connection exception: > java.net.ConnectException: Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) > at > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) > at > org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607) > at > org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705) > at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368) > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521) > at org.apache.hadoop.ipc.Client.call(Client.java:1438) > at org.apache.hadoop.ipc.Client.call(Client.java:1399) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) > at com.sun.proxy.$Proxy17.getApplicationAttempts(Unknown Source) > at > org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationAttempts(ApplicationClientProtocolPBClientImpl.java:372) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101) > at com.sun.proxy.$Proxy18.getApplicationAttempts(Unknown Source) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationAttempts(YarnClientImpl.java:583) > at > org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA.testGetApplicationAttemptsOnHA(TestApplicationClientProtocolOnHA.java:137) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To
unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
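Editorial note on the ConnectException above: the test fails on the first "Connection refused" it sees, typically because the RM it targets has not finished becoming active. A common way to harden such HA tests is to retry the first call with a bounded wait instead of treating a single refused connection as fatal. The following is a minimal illustrative sketch of that pattern only; it is not the actual YARN-2710 patch, and the deadline and interval values are arbitrary assumptions.

{code:java}
import java.net.ConnectException;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

/** Illustrative helper: retry a call while the server is still coming up. */
public final class RetryingCall {

  private RetryingCall() {
  }

  public static <T> T callWithRetry(Callable<T> call, long timeoutMs,
      long retryIntervalMs) throws Exception {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (true) {
      try {
        return call.call();
      } catch (ConnectException e) {
        // The target (e.g. an RM still failing over) is not accepting
        // connections yet; retry until the deadline expires.
        if (System.nanoTime() >= deadline) {
          throw e;
        }
        Thread.sleep(retryIntervalMs);
      }
    }
  }
}
{code}

In a test, this would wrap the first protocol call (for example getApplicationAttempts) so that a transient refusal during failover does not fail the whole run.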
[jira] [Updated] (YARN-2710) RM HA tests failed intermittently on trunk
[ https://issues.apache.org/jira/browse/YARN-2710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated YARN-2710: Attachment: YARN-2710-branch-3.2.003.patch > RM HA tests failed intermittently on trunk > -- > > Key: YARN-2710 > URL: https://issues.apache.org/jira/browse/YARN-2710 > Project: Hadoop YARN > Issue Type: Bug > Components: client > Environment: Java 8, jenkins >Reporter: Wangda Tan >Assignee: Ahmed Hussein >Priority: Major > Fix For: 3.3.0 > > Attachments: TestResourceTrackerOnHA-output.2.txt, > YARN-2710-branch-2.10.001.patch, YARN-2710-branch-2.10.002.patch, > YARN-2710-branch-2.10.003.patch, YARN-2710-branch-3.2.003.patch, > YARN-2710.001.patch, YARN-2710.002.patch, YARN-2710.003.patch, > org.apache.hadoop.yarn.client.TestResourceTrackerOnHA-output.txt > > > Failure like, it can be happened in TestApplicationClientProtocolOnHA, > TestResourceTrackerOnHA, etc. > {code} > org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA > testGetApplicationAttemptsOnHA(org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA) > Time elapsed: 9.491 sec <<< ERROR! > java.net.ConnectException: Call From asf905.gq1.ygridcore.net/67.195.81.149 > to asf905.gq1.ygridcore.net:28032 failed on connection exception: > java.net.ConnectException: Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > at > sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599) > at > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) > at > org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607) > at > org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705) > at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368) > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521) > at org.apache.hadoop.ipc.Client.call(Client.java:1438) > at org.apache.hadoop.ipc.Client.call(Client.java:1399) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) > at com.sun.proxy.$Proxy17.getApplicationAttempts(Unknown Source) > at > org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationAttempts(ApplicationClientProtocolPBClientImpl.java:372) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101) > at com.sun.proxy.$Proxy18.getApplicationAttempts(Unknown Source) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationAttempts(YarnClientImpl.java:583) > at > org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA.testGetApplicationAttemptsOnHA(TestApplicationClientProtocolOnHA.java:137) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10220) RM HA times out intermittently
Ahmed Hussein created YARN-10220: Summary: RM HA times out intermittently Key: YARN-10220 URL: https://issues.apache.org/jira/browse/YARN-10220 Project: Hadoop YARN Issue Type: Bug Reporter: Ahmed Hussein TestResourceTrackerOnHA, among other tests, times out intermittently: * TestApplicationClientProtocolOnHA * TestApplicationMasterServiceProtocolForTimelineV2 * TestApplicationMasterServiceProtocolOnHA {code:bash} [INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ hadoop-yarn-client --- [INFO] [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Running org.apache.hadoop.yarn.client.TestResourceTrackerOnHA [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.612 s <<< FAILURE! - in org.apache.hadoop.yarn.client.TestResourceTrackerOnHA [ERROR] testResourceTrackerOnHA(org.apache.hadoop.yarn.client.TestResourceTrackerOnHA) Time elapsed: 19.473 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out after 15000 milliseconds at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method) at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198) at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:336) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:203) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533) at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:699) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812) at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636) at org.apache.hadoop.ipc.Client.call(Client.java:1452) at org.apache.hadoop.ipc.Client.call(Client.java:1405) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy93.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy94.registerNodeManager(Unknown Source) at org.apache.hadoop.yarn.client.TestResourceTrackerOnHA.testResourceTrackerOnHA(TestResourceTrackerOnHA.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:80) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) [INFO] [INFO] Results: [INFO] [ERROR] Errors: [ERROR] TestResourceTrackerOnHA.testResourceTrackerOnHA:64 »
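Editorial note on the timeout above: the 15-second JUnit timeout fires while the IPC client is still inside its own connect/retry loop against an RM that is not accepting connections, so the test is killed before the client ever gives up. The connect-retry keys used below are standard Hadoop IPC configuration properties, but the values shown are illustrative assumptions about how a test might shorten the loop, not settings taken from any committed patch.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class IpcRetryTuning {

  /** Build a test Configuration whose IPC client gives up quickly. */
  public static Configuration tunedConf() {
    Configuration conf = new Configuration();
    // Fewer connect attempts per server...
    conf.setInt("ipc.client.connect.max.retries", 2);
    // ...and a shorter pause (ms) between attempts, so a refused or hanging
    // connection surfaces well inside the JUnit timeout budget.
    conf.setInt("ipc.client.connect.retry.interval", 500);
    return conf;
  }
}
{code}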
[jira] [Commented] (YARN-10201) Make AMRMProxyPolicy aware of SC load
[ https://issues.apache.org/jira/browse/YARN-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17073114#comment-17073114 ] Young Chen commented on YARN-10201: --- The remaining checkstyle/findbugs issues are due to parameter count in AllocateResponse and PBImpl style / synchronization > Make AMRMProxyPolicy aware of SC load > - > > Key: YARN-10201 > URL: https://issues.apache.org/jira/browse/YARN-10201 > Project: Hadoop YARN > Issue Type: Sub-task > Components: amrmproxy >Reporter: Young Chen >Assignee: Young Chen >Priority: Major > Attachments: YARN-10201.v0.patch, YARN-10201.v1.patch, > YARN-10201.v2.patch, YARN-10201.v3.patch, YARN-10201.v4.patch, > YARN-10201.v5.patch > > > LocalityMulticastAMRMProxyPolicy is currently unaware of SC load when > splitting resource requests. We propose changes to the policy so that it > receives feedback from SCs and can load balance requests across the federated > cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
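For readers new to YARN-10201: the idea is that LocalityMulticastAMRMProxyPolicy, when splitting an AM's resource requests across sub-clusters (SCs), should weight the split by how loaded each SC reports itself to be. The sketch below only illustrates the concept of turning a load signal into normalized weights; the class, method, and "pending containers" signal are assumptions and do not reflect the actual patch.

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative only: derive per-sub-cluster weights from a load signal. */
public class SubClusterLoadWeights {

  /**
   * @param pendingPerSubCluster hypothetical load signal, e.g. pending
   *        containers reported back by each sub-cluster
   * @return weights summing to 1, larger for less loaded sub-clusters
   */
  public static Map<String, Double> computeWeights(
      Map<String, Long> pendingPerSubCluster) {
    Map<String, Double> raw = new HashMap<>();
    double total = 0.0;
    for (Map.Entry<String, Long> e : pendingPerSubCluster.entrySet()) {
      double w = 1.0 / (e.getValue() + 1.0); // +1 keeps idle SCs finite
      raw.put(e.getKey(), w);
      total += w;
    }
    Map<String, Double> normalized = new HashMap<>();
    for (Map.Entry<String, Double> e : raw.entrySet()) {
      normalized.put(e.getKey(), e.getValue() / total);
    }
    return normalized;
  }
}
{code}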
[jira] [Moved] (YARN-10219) YARN service placement constraints is broken
[ https://issues.apache.org/jira/browse/YARN-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang moved HIVE-23125 to YARN-10219: - Key: YARN-10219 (was: HIVE-23125) Issue Type: Bug (was: Task) Project: Hadoop YARN (was: Hive) > YARN service placement constraints is broken > > > Key: YARN-10219 > URL: https://issues.apache.org/jira/browse/YARN-10219 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Eric Yang >Priority: Major > > YARN service placement constraint does not work with node labels or node > attributes. Example of placement constraints: > {code} > "placement_policy": { > "constraints": [ > { > "type": "AFFINITY", > "scope": "NODE", > "node_attributes": { > "label":["genfile"] > }, > "target_tags": [ > "ping" > ] > } > ] > }, > {code} > Node attribute added: > {code} ./bin/yarn nodeattributes -add "host-3.example.com:label=genfile" > {code} > Scheduling activities shows: > {code} Node does not match partition or placement constraints, > unsatisfied PC expression="in,node,ping", target-type=ALLOCATION_TAG > > 1 > host-3.example.com:45454{code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs
[ https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17073024#comment-17073024 ] Hadoop QA commented on YARN-10215: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 58s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 29s{color} | {color:orange} root: The patch generated 12 new + 22 unchanged - 3 fixed = 34 total (was 25) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 36s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 50s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 47s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} |
[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs
[ https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072898#comment-17072898 ] Andras Gyori commented on YARN-10215: - Thank you [~adam.antal] for the review. I have addressed your concerns with the following additions: # A manual_redirection query parameter has been added to the corresponding endpoints. It is false by default, and if turned on, the automatic redirect (a 307 response) is swapped with a 206 response, that could be used to handle the redirection manually (as the UI does it). # I think a null check is already in place in the handleResponse function. # The createEmptyContainerLogInfo has been extracted to a helper file for common use. > Endpoint for obtaining direct URL for the logs > -- > > Key: YARN-10215 > URL: https://issues.apache.org/jira/browse/YARN-10215 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Andras Gyori >Priority: Major > Attachments: YARN-10025.001.patch, YARN-10025.002.patch > > > If CORS protected UIs are set up, there is an issue when the browser tries to > access the logs of a running container in the RM web UIv2. > Assuming ATS is not up, the browser follows the following call chain: > - Tries to access ATS, it fails, falls back to JHS > - From RM the browser received basic app info, we know that the application > is running > - From the JHS we got the list of containers and their log files. > - When we try to access a specific log file, the JHS redirects the request to > the NM's UI (on which node the container is running). This redirect is > performed by the browser automatically. In this setup the host is considered > as a protected information, thus the browser omits the "Origin" field from > the request when this redirect is done. The browser then denies access to the > NodeManager's web UI due to the CORS header set up for NM, but the Origin is > null in the redirect request. > - Finally, "Logs are unavailable" message is shown in the RM web UIv2 due to > the CORS violation. > We should fix this. As an approach we can expose another endpoints which only > returns the URL of the NodeManager what we should call directly from the UIv2 > in order to receive the log. This adds a bit of a complexity, but will enable > users to keep the CORS protected setup. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
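To make the manual_redirection behaviour described above concrete: with the parameter set, the server is said to answer with a 206 instead of automatically issuing the 307, so the caller can perform the second hop itself with full control over CORS. The client-side sketch below invents the endpoint path and assumes the target URL is exposed via the Location header; only the manual_redirection parameter name comes from the comment, so treat everything else as illustration.

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

/** Illustrative client-side handling of a manually redirected log request. */
public class ManualLogRedirect {

  public static String resolveLogUrl(String jhsBase, String containerId)
      throws IOException {
    // Hypothetical endpoint shape; only manual_redirection is from the JIRA.
    URL url = new URL(jhsBase + "/ws/v1/history/containers/" + containerId
        + "/logs?manual_redirection=true");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setInstanceFollowRedirects(false); // inspect the target, don't follow
    try {
      int status = conn.getResponseCode();
      if (status == 206 || status == 307) {
        // Assumption: the NodeManager URL is carried in the Location header.
        return conn.getHeaderField("Location");
      }
      throw new IOException("Unexpected status " + status);
    } finally {
      conn.disconnect();
    }
  }
}
{code}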
[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs
[ https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Gyori updated YARN-10215: Attachment: YARN-10025.002.patch > Endpoint for obtaining direct URL for the logs > -- > > Key: YARN-10215 > URL: https://issues.apache.org/jira/browse/YARN-10215 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Andras Gyori >Priority: Major > Attachments: YARN-10025.001.patch, YARN-10025.002.patch > > > If CORS protected UIs are set up, there is an issue when the browser tries to > access the logs of a running container in the RM web UIv2. > Assuming ATS is not up, the browser follows the following call chain: > - Tries to access ATS, it fails, falls back to JHS > - From RM the browser received basic app info, we know that the application > is running > - From the JHS we got the list of containers and their log files. > - When we try to access a specific log file, the JHS redirects the request to > the NM's UI (on which node the container is running). This redirect is > performed by the browser automatically. In this setup the host is considered > as a protected information, thus the browser omits the "Origin" field from > the request when this redirect is done. The browser then denies access to the > NodeManager's web UI due to the CORS header set up for NM, but the Origin is > null in the redirect request. > - Finally, "Logs are unavailable" message is shown in the RM web UIv2 due to > the CORS violation. > We should fix this. As an approach we can expose another endpoints which only > returns the URL of the NodeManager what we should call directly from the UIv2 > in order to receive the log. This adds a bit of a complexity, but will enable > users to keep the CORS protected setup. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10208) Add metric in CapacityScheduler for evaluating the time difference between node heartbeats
[ https://issues.apache.org/jira/browse/YARN-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072873#comment-17072873 ] Adam Antal commented on YARN-10208: --- Thanks for the patch [~lapjarn]. Generally it looks good to me. Minor nit: Could you please rename the function {{updateMetrics}} in {{CapacityScheduler}}? It's too broad and could easily mislead the reader of the code. > Add metric in CapacityScheduler for evaluating the time difference between > node heartbeats > -- > > Key: YARN-10208 > URL: https://issues.apache.org/jira/browse/YARN-10208 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Pranjal Protim Borah >Assignee: Pranjal Protim Borah >Priority: Minor > Attachments: YARN-10208.001.patch, YARN-10208.002.patch, > YARN-10208.003.patch, YARN-10208.004.patch > > > Metric measuring average time interval between node heartbeats in capacity > scheduler on node update event. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
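For context on the metric discussed above: it tracks the wall-clock gap between consecutive heartbeats from the same node, as observed on the scheduler's node-update path. Below is a self-contained sketch of that bookkeeping with invented class and method names; it is not the YARN-10208 patch.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative tracker for the average time between node heartbeats. */
public class HeartbeatIntervalTracker {

  private final ConcurrentHashMap<String, Long> lastHeartbeat =
      new ConcurrentHashMap<>();
  private final AtomicLong totalIntervalMs = new AtomicLong();
  private final AtomicLong samples = new AtomicLong();

  /** Called from the node-update (heartbeat) handling path. */
  public void onNodeHeartbeat(String nodeId, long nowMs) {
    Long previous = lastHeartbeat.put(nodeId, nowMs);
    if (previous != null) {
      totalIntervalMs.addAndGet(nowMs - previous);
      samples.incrementAndGet();
    }
  }

  /** Average milliseconds between heartbeats across all nodes so far. */
  public double averageIntervalMs() {
    long n = samples.get();
    return n == 0 ? 0.0 : (double) totalIntervalMs.get() / n;
  }
}
{code}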
[jira] [Commented] (YARN-10189) Code cleanup in LeveldbRMStateStore
[ https://issues.apache.org/jira/browse/YARN-10189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072804#comment-17072804 ] Benjamin Teke commented on YARN-10189: -- Hi [~adam.antal], Thanks for reviewing the patch. Uploaded a new one. DBManager now implements Closeable. Based on our offline discussion, the existing unit tests in the classes which use DBManager should be enough for now, because the actual functionality didn't change and it was thoroughly tested before. For the Consumer generalization: I haven't found any similar JniDBFactory.factory uses, so to my mind it's currently not worth it. Removed the unnecessary whitespace changes, but also based on our offline discussion the =null cases can fit under "Any other cleanup". I also tested the functionality in pseudo-distributed mode. I compiled and launched the patched Hadoop, ran an example MapReduce Pi job with the correct ResourceManager recovery configuration (recovery enabled and the store class was set to LeveldbRMStateStore), restarted the ResourceManager, and the job continued from where it was interrupted. > Code cleanup in LeveldbRMStateStore > --- > > Key: YARN-10189 > URL: https://issues.apache.org/jira/browse/YARN-10189 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Benjamin Teke >Assignee: Benjamin Teke >Priority: Minor > Attachments: YARN-10189.001.patch, YARN-10189.POC001.patch, > YARN-10189.POC002.patch > > > Some things can be improved: > * throws Exception declaration can be removed from > LeveldbRMStateStore.initInternal method > * key variable is redundant in LeveldbRMStateStore.dbStoreVersion > * try can use automatic Resource management in > LeveldbRMStateStore.loadReservationState/loadRMDTSecretManagerKeys/loadRMDTSecretManagerTokens/loadRMApps/... > etc > * there were some methods which were copied to LeveldbConfigurationStore > (ie: openDatabase, storeVersion, loadVersion, CompactionTimerClass nested > class), a helper class could be created to reduce the duplicated code > * Any other cleanup -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
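Since the comment above mentions that DBManager now implements Closeable, here is what that enables in calling code: the LevelDB handle can be opened and released with try-with-resources. The class below is a simplified stand-in written against the leveldbjni API these state stores already use; it is not the DBManager from the patch.

{code:java}
import java.io.Closeable;
import java.io.File;
import java.io.IOException;

import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

/** Simplified stand-in for a Closeable LevelDB wrapper. */
public class SimpleDbManager implements Closeable {

  private DB db;

  /** Open (or create) the database at the given path. */
  public DB initDatabase(File path, Options options) throws IOException {
    db = JniDBFactory.factory.open(path, options);
    return db;
  }

  @Override
  public void close() throws IOException {
    if (db != null) {
      db.close();
      db = null;
    }
  }
}
{code}

A caller can then write try (SimpleDbManager mgr = new SimpleDbManager()) { ... } and the database is closed even if loading state throws.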
[jira] [Commented] (YARN-10166) Add detail log for ApplicationAttemptNotFoundException
[ https://issues.apache.org/jira/browse/YARN-10166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072780#comment-17072780 ] Youquan Lin commented on YARN-10166: [~sunilg] [~adam.antal] can you take a look at this? > Add detail log for ApplicationAttemptNotFoundException > -- > > Key: YARN-10166 > URL: https://issues.apache.org/jira/browse/YARN-10166 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Reporter: Youquan Lin >Priority: Minor > Labels: patch > Attachments: YARN-10166-001.patch, YARN-10166-002.patch, > YARN-10166-003.patch, YARN-10166-004.patch > > > Suppose user A killed the app; ApplicationMasterService will then call > unregisterAttempt() for this app. Sometimes the app's AM continues to call the > allocate() method and reports an error as follows. > {code:java} > Application attempt appattempt_1582520281010_15271_01 doesn't exist in > ApplicationMasterService cache. > {code} > If user B has been watching the AM log, he will be confused about why the > attempt is no longer in the ApplicationMasterService cache. So I think we can > add a detailed log for ApplicationAttemptNotFoundException as follows. > {code:java} > Application attempt appattempt_1582630210671_14658_01 doesn't exist in > ApplicationMasterService cache.App state: KILLED,finalStatus: KILLED > ,diagnostics: App application_1582630210671_14658 killed by userA from > 127.0.0.1 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
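The request above boils down to enriching the "attempt not found" message with the application's state, final status, and diagnostics so the AM owner can see that the app was deliberately killed. A hedged sketch of building such a message follows; the AppSummary holder and its fields are placeholders, not the RMApp accessors used in the actual patches.

{code:java}
/** Illustrative construction of the enriched "attempt not found" message. */
public class AttemptNotFoundMessage {

  /** Minimal stand-in for state the RM already tracks about the app. */
  public static class AppSummary {
    final String state;        // e.g. "KILLED"
    final String finalStatus;  // e.g. "KILLED"
    final String diagnostics;  // e.g. "App ... killed by userA from 127.0.0.1"

    public AppSummary(String state, String finalStatus, String diagnostics) {
      this.state = state;
      this.finalStatus = finalStatus;
      this.diagnostics = diagnostics;
    }
  }

  public static String build(String attemptId, AppSummary app) {
    // Mirrors the message shape quoted in the issue description above.
    return "Application attempt " + attemptId
        + " doesn't exist in ApplicationMasterService cache."
        + "App state: " + app.state
        + ",finalStatus: " + app.finalStatus
        + " ,diagnostics: " + app.diagnostics;
  }
}
{code}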
[jira] [Commented] (YARN-10201) Make AMRMProxyPolicy aware of SC load
[ https://issues.apache.org/jira/browse/YARN-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072667#comment-17072667 ] Hadoop QA commented on YARN-10201: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 4s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 20s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 13s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 5 new + 241 unchanged - 0 fixed = 246 total (was 241) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 43s{color} | {color:red} hadoop-yarn-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 59s{color} | {color:red} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 35s{color} | {color:red} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m 9s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 19s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}261m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | |
[jira] [Commented] (YARN-6077) /bin/bash path is hardcoded in node manager
[ https://issues.apache.org/jira/browse/YARN-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072647#comment-17072647 ] Mårten Lindblad commented on YARN-6077: --- Another option could be to use /usr/bin/env bash > /bin/bash path is hardcoded in node manager > --- > > Key: YARN-6077 > URL: https://issues.apache.org/jira/browse/YARN-6077 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Priority: Major > > We need a configuration entry similar to MRJobConfig.MAPRED_ADMIN_USER_SHELL > to support multiple environments like FreeBSD. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
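Both suggestions in YARN-6077 point the same way: the NodeManager should resolve the shell from configuration (or via /usr/bin/env) rather than assuming /bin/bash exists at that absolute path. A small sketch of the configuration-driven variant follows; the property name is hypothetical, since the issue only notes that something analogous to MRJobConfig.MAPRED_ADMIN_USER_SHELL is needed.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ShellPathResolver {

  /** Hypothetical property name; no such key exists in YARN today. */
  public static final String NM_CONTAINER_SHELL_KEY =
      "yarn.nodemanager.container-shell.path";

  /** Falls back to the historical hard-coded default. */
  public static String resolveShell(Configuration conf) {
    return conf.get(NM_CONTAINER_SHELL_KEY, "/bin/bash");
  }
}
{code}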
[jira] [Created] (YARN-10218) [GPG] Support HTTPS in GPG
Bilwa S T created YARN-10218: Summary: [GPG] Support HTTPS in GPG Key: YARN-10218 URL: https://issues.apache.org/jira/browse/YARN-10218 Project: Hadoop YARN Issue Type: Sub-task Reporter: Bilwa S T Assignee: Bilwa S T HTTPS support in the Router is handled as part of YARN-10120. HTTPS REST calls from GPG to the Router must be supported. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-9355) RMContainerRequestor#makeRemoteRequest has confusing log message
[ https://issues.apache.org/jira/browse/YARN-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Ahuja reassigned YARN-9355: - Assignee: Umesh (was: Siddharth Ahuja) > RMContainerRequestor#makeRemoteRequest has confusing log message > > > Key: YARN-9355 > URL: https://issues.apache.org/jira/browse/YARN-9355 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Umesh >Priority: Trivial > Labels: newbie, newbie++ > > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor#makeRemoteRequest > has this log: > {code:java} > if (ask.size() > 0 || release.size() > 0) { > LOG.info("getResources() for " + applicationId + ":" + " ask=" > + ask.size() + " release= " + release.size() + " newContainers=" > + allocateResponse.getAllocatedContainers().size() > + " finishedContainers=" + numCompletedContainers > + " resourcelimit=" + availableResources + " knownNMs=" > + clusterNmCount); > } > {code} > The reason "getResources()" is printed is that > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator#getResources > invokes makeRemoteRequest. This is not very informative and is error-prone, as > the name of getResources could change over time and the log would become outdated. > Moreover, it's not a good idea to print a method name from a method below the > current one in the stack. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
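The complaint in YARN-9355 is that the log line hard-codes the name of its caller ("getResources()"), which is misleading and fragile. A caller-agnostic wording built from the same variables is sketched below; the exact phrasing is only a suggestion, not the committed fix.

{code:java}
/** Illustrative, caller-agnostic replacement for the quoted log message. */
public class RemoteRequestLog {

  public static String format(String applicationId, int askSize,
      int releaseSize, int newContainers, int finishedContainers,
      String availableResources, int clusterNmCount) {
    // Describe what makeRemoteRequest itself just did, without naming callers.
    return "applicationId=" + applicationId
        + ": ask=" + askSize
        + " release=" + releaseSize
        + " newContainers=" + newContainers
        + " finishedContainers=" + finishedContainers
        + " resourceLimit=" + availableResources
        + " knownNMs=" + clusterNmCount;
  }
}
{code}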
[jira] [Commented] (YARN-10202) Fix documentation about NodeAttributes.
[ https://issues.apache.org/jira/browse/YARN-10202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072469#comment-17072469 ] Hudson commented on YARN-10202: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18106 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18106/]) YARN-10202. Fix documentation about NodeAttributes. Contributed by Sen (aajisaka: rev c162648aff68552d87db8a013b850c17fee762c0) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeAttributes.md > Fix documentation about NodeAttributes. > --- > > Key: YARN-10202 > URL: https://issues.apache.org/jira/browse/YARN-10202 > Project: Hadoop YARN > Issue Type: Bug > Components: documentation >Affects Versions: 3.2.1 >Reporter: Sen Zhao >Assignee: Sen Zhao >Priority: Minor > Fix For: 3.3.0, 3.2.2 > > Attachments: YARN-10202.001.patch > > > {noformat:title=NodeAttributes.md} > The above SchedulingRequest requests for 1 container on nodes that must > satisfy following constraints: > 1. Node attribute *`rm.yarn.io/python`* doesn't exist on the node or it exist > but its value is not equal to 3 > 2. Node attribute *`rm.yarn.io/java`* must exist on the node and its value is > equal to 1.8 > {noformat} > should be > {noformat} > The above SchedulingRequest requests for 1 container on nodes that must > satisfy following constraints: > 1. Node attribute *`rm.yarn.io/python`* doesn't exist on the node or it exist > but its value is not equal to 3 > 2. Node attribute *`rm.yarn.io/java`* must exist on the node and its value is > equal to 1.8 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10201) Make AMRMProxyPolicy aware of SC load
[ https://issues.apache.org/jira/browse/YARN-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072460#comment-17072460 ] Young Chen commented on YARN-10201: --- Attached a new patch addressing checkstyle, findbugs, licensing, etc. issues. [~bibinchundatt] let me know your thoughts when you have time. Thanks! > Make AMRMProxyPolicy aware of SC load > - > > Key: YARN-10201 > URL: https://issues.apache.org/jira/browse/YARN-10201 > Project: Hadoop YARN > Issue Type: Sub-task > Components: amrmproxy >Reporter: Young Chen >Assignee: Young Chen >Priority: Major > Attachments: YARN-10201.v0.patch, YARN-10201.v1.patch, > YARN-10201.v2.patch, YARN-10201.v3.patch, YARN-10201.v4.patch, > YARN-10201.v5.patch > > > LocalityMulticastAMRMProxyPolicy is currently unaware of SC load when > splitting resource requests. We propose changes to the policy so that it > receives feedback from SCs and can load balance requests across the federated > cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10201) Make AMRMProxyPolicy aware of SC load
[ https://issues.apache.org/jira/browse/YARN-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Young Chen updated YARN-10201: -- Attachment: YARN-10201.v5.patch > Make AMRMProxyPolicy aware of SC load > - > > Key: YARN-10201 > URL: https://issues.apache.org/jira/browse/YARN-10201 > Project: Hadoop YARN > Issue Type: Sub-task > Components: amrmproxy >Reporter: Young Chen >Assignee: Young Chen >Priority: Major > Attachments: YARN-10201.v0.patch, YARN-10201.v1.patch, > YARN-10201.v2.patch, YARN-10201.v3.patch, YARN-10201.v4.patch, > YARN-10201.v5.patch > > > LocalityMulticastAMRMProxyPolicy is currently unaware of SC load when > splitting resource requests. We propose changes to the policy so that it > receives feedback from SCs and can load balance requests across the federated > cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10120) In Federation Router Nodes/Applications/About pages throws 500 exception when https is enabled
[ https://issues.apache.org/jira/browse/YARN-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072453#comment-17072453 ] Prabhu Joseph commented on YARN-10120: -- [~BilwaST] The patch [^YARN-10120.002.patch] looks good, +1. Will commit it tomorrow if no other comments. > In Federation Router Nodes/Applications/About pages throws 500 exception when > https is enabled > -- > > Key: YARN-10120 > URL: https://issues.apache.org/jira/browse/YARN-10120 > Project: Hadoop YARN > Issue Type: Bug > Components: federation >Reporter: Sushanta Sen >Assignee: Bilwa S T >Priority: Critical > Attachments: YARN-10120.001.patch, YARN-10120.002.patch > > > In Federation Router Nodes/Applications/About pages throws 500 exception when > https is enabled. > yarn.router.webapp.https.address =router ip:8091 > {noformat} > 2020-02-07 16:38:49,990 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster/apps > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:166) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277) > at > com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182) > at > com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82) > at > com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119) > at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133) > at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130) > at > com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203) > at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1622) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at >
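The 500 above only appears once the Router web UI is served over HTTPS, which usually points at scheme-dependent URL construction somewhere in the page-rendering path. As background (and not a description of the committed fix), hadoop-yarn-common exposes a helper for picking the scheme from the configured HTTP policy; the sketch below assumes that helper and shows the general pattern of building scheme-aware web addresses.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.webapp.util.WebAppUtils;

public class SchemeAwareUrl {

  /** Build a web address that honours the configured yarn.http.policy. */
  public static String toWebUrl(Configuration conf, String hostPort,
      String path) {
    // Returns "https://" when the policy is HTTPS_ONLY, otherwise "http://".
    return WebAppUtils.getHttpSchemePrefix(conf) + hostPort + path;
  }

  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    System.out.println(toWebUrl(conf, "router.example.com:8091", "/cluster/apps"));
  }
}
{code}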
[jira] [Commented] (YARN-10217) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/YARN-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072413#comment-17072413 ] Hadoop QA commented on YARN-10217: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 32s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 30s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 11s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}234m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDecommissionWithStriped | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport | | | hadoop.hdfs.TestFileCreation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | YARN-10217 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998391/YARN-10217-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 916064f7ddd9 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c734d24 | | maven | version: Apache