[jira] [Commented] (YARN-6278) Enforce to use correct node and npm version in new YARN-UI build
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895572#comment-15895572 ]

Sunil G commented on YARN-6278:
---

Thanks [~leftnoteasy] for review and commit. Thanks [~Sreenath] for additional review.

> Enforce to use correct node and npm version in new YARN-UI build
>
> Key: YARN-6278
> URL: https://issues.apache.org/jira/browse/YARN-6278
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Sunil G
> Assignee: Sunil G
> Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6278.0001.patch
>
> Link to the error is [here|https://builds.apache.org/job/PreCommit-HDFS-Build/18535/artifact/patchprocess/patch-compile-root.txt]
> {code}
> qunit-notifications#0.1.1 bower_components/qunit-notifications
> [INFO]
> [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/merge-trees/index.js:33
> class MergeTrees {
> ^
> Unexpected reserved word
> SyntaxError: Unexpected reserved word
> at Module._compile (module.js:439:25)
> at Object.Module._extensions..js (module.js:474:10)
> at Module.load (module.js:356:32)
> at Function.Module._load (module.js:312:12)
> at Module.require (module.js:364:17)
> at require (module.js:380:17)
> at Object.<anonymous> (/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/broccoli-merge-trees/index.js:2:18)
> at Module._compile (module.js:456:26)
> at Object.Module._extensions..js (module.js:474:10)
> at Module.load (module.js:356:32)
> {code}

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
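The "Unexpected reserved word" failure above is the classic symptom of an ES6 `class` declaration being parsed by a pre-4.x Node.js runtime on the build host. One common way to pin the toolchain in a Maven build is frontend-maven-plugin's install-node-and-npm goal; this is a hedged sketch (the versions and whether YARN-6278.0001.patch took exactly this approach are assumptions):

```xml
<!-- Sketch: pin node/npm so a stale system Node.js (which cannot parse
     ES6 "class") is never used for the UI build. Versions illustrative. -->
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>1.3</version>
  <executions>
    <execution>
      <id>install node and npm</id>
      <goals>
        <goal>install-node-and-npm</goal>
      </goals>
      <configuration>
        <nodeVersion>v5.7.1</nodeVersion>
        <npmVersion>3.6.0</npmVersion>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With the toolchain installed into `target/`, subsequent `npm` and `ember build` executions can point at the downloaded binaries instead of whatever is on the CI host's PATH.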
[jira] [Commented] (YARN-6248) Killing an app with pending container requests leaves the user in UsersManager
[ https://issues.apache.org/jira/browse/YARN-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895568#comment-15895568 ]

Sunil G commented on YARN-6248:
---

Committing now

> Killing an app with pending container requests leaves the user in UsersManager
> --
>
> Key: YARN-6248
> URL: https://issues.apache.org/jira/browse/YARN-6248
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha3
> Reporter: Eric Payne
> Assignee: Eric Payne
>
> Attachments: User Left Over.jpg, YARN-6248.001.patch
>
> If an app is still asking for resources when it is killed, the user is left
> in the UsersManager structure and shows up on the GUI.
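The leak described in YARN-6248 is a general pattern: per-user bookkeeping is only cleaned up on one terminal path (last container released), so an app killed while still *asking* for resources never triggers the cleanup. A minimal sketch of the pattern, with hypothetical names that are not the actual UsersManager code:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative per-user app counter. The key point: appFinished() must run
 * for EVERY terminal state (including KILLED), or the user entry leaks.
 */
class UsersManagerSketch {
    private final Map<String, Integer> activeAppsPerUser = new HashMap<>();

    void appSubmitted(String user) {
        activeAppsPerUser.merge(user, 1, Integer::sum);
    }

    /** Invoked on every terminal transition, including a kill. */
    void appFinished(String user) {
        // Returning null from the remapping function removes the entry.
        activeAppsPerUser.computeIfPresent(user, (u, n) -> n > 1 ? n - 1 : null);
    }

    boolean isTracked(String user) {
        return activeAppsPerUser.containsKey(user);
    }
}
```

If the kill path bypasses `appFinished`, `isTracked` stays true forever, which is exactly the "user shows up on the GUI" symptom.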
[jira] [Updated] (YARN-6278) Enforce to use correct node and npm version in new YARN-UI build
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wangda Tan updated YARN-6278:
---

Summary: Enforce to use correct node and npm version in new YARN-UI build
(was: -Pyarn-ui build seems broken in trunk)

> Enforce to use correct node and npm version in new YARN-UI build
>
> Key: YARN-6278
> URL: https://issues.apache.org/jira/browse/YARN-6278
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Sunil G
> Assignee: Sunil G
> Priority: Critical
>
> Attachments: YARN-6278.0001.patch
[jira] [Commented] (YARN-6278) -Pyarn-ui build seems broken in trunk
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895564#comment-15895564 ]

Wangda Tan commented on YARN-6278:
---

+1 committing.

> -Pyarn-ui build seems broken in trunk
> -
>
> Key: YARN-6278
> URL: https://issues.apache.org/jira/browse/YARN-6278
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Sunil G
> Assignee: Sunil G
> Priority: Critical
>
> Attachments: YARN-6278.0001.patch
[jira] [Commented] (YARN-6207) Move application can fail when attempt add event is delayed
[ https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895528#comment-15895528 ]

Hadoop QA commented on YARN-6207:
---

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 12m 54s | trunk passed |
| +1 | compile | 0m 32s | trunk passed |
| +1 | checkstyle | 0m 33s | trunk passed |
| +1 | mvnsite | 0m 39s | trunk passed |
| +1 | mvneclipse | 0m 16s | trunk passed |
| +1 | findbugs | 1m 4s | trunk passed |
| +1 | javadoc | 0m 20s | trunk passed |
| +1 | mvninstall | 0m 31s | the patch passed |
| +1 | compile | 0m 33s | the patch passed |
| +1 | javac | 0m 33s | the patch passed |
| +1 | checkstyle | 0m 27s | the patch passed |
| +1 | mvnsite | 0m 32s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 5s | the patch passed |
| +1 | javadoc | 0m 18s | the patch passed |
| -1 | unit | 39m 51s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 61m 46s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6207 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12856036/YARN-6207.008.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 453514891c44 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0f336ba |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15172/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15172/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15172/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Move application can fail when attempt add event is delayed
>
> Key: YARN-6207
> URL: https://issues.apache.org/jira/browse/YARN-6207
>
[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895518#comment-15895518 ]

Rohith Sharma K S commented on YARN-6256:
---

bq. But if you specify that as the fromId, that entity is returned again in the next call because the fromId is inclusive, right? Then how does UI deal with that? Does it ask for 51 entities in the next page and drop the first (redundant) one? Am I misunderstanding this? Is making it precise (e.g. making fromId exclusive) feasible?

As far as I discussed with UI experts earlier, they are fine with displaying one entity from the previous page. They also use this behavior for identifying the last page. It is the established behavior in ATS1/1.5. Since the behavior is defined, I think it should not be an issue.

bq. l.257: I thought the check for single entity read was intentional (I guess this was introduced by YARN-4237). Do we no longer need it?

There is no behavior change here; even I was surprised that the singleEntityReader check was required!

> Add FROM_ID info key for timeline entities in reader response.
> ---
>
> Key: YARN-6256
> URL: https://issues.apache.org/jira/browse/YARN-6256
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelineserver
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Labels: yarn-5355-merge-blocker
> Attachments: YARN-6256-YARN-5355.0001.patch, YARN-6256-YARN-5355.0002.patch
>
> It is a continuation of YARN-6027 to add the FROM_ID key in all other timeline
> entity responses, which includes:
> # Flow run entity response.
> # Application entity response.
> # Generic timeline entity response - here we need to retrospect on the idprefix
> filter which is now separately provided.
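The inclusive-fromId paging contract discussed above (the last entity of page N is returned again as the first entity of page N+1, and a short page signals the last page) can be sketched as follows; the class and method names are illustrative, not the actual TimelineReader API:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative client-side paging over an inclusive fromId cursor. */
class InclusiveFromIdPager {
    /** Stand-in reader: up to 'limit' ids starting at fromId (inclusive). */
    static List<Integer> fetch(List<Integer> all, Integer fromId, int limit) {
        List<Integer> page = new ArrayList<>();
        for (int id : all) {
            if (fromId != null && id < fromId) continue;
            if (page.size() == limit) break;
            page.add(id);
        }
        return page;
    }

    /** Collect every entity exactly once despite the inclusive cursor. */
    static List<Integer> readAll(List<Integer> all, int pageSize) {
        List<Integer> out = new ArrayList<>();
        Integer from = null;
        while (true) {
            List<Integer> page = fetch(all, from, pageSize);
            int start = (from == null) ? 0 : 1; // drop the redundant first entity
            out.addAll(page.subList(start, page.size()));
            if (page.size() < pageSize) break;  // short page => last page
            from = page.get(page.size() - 1);   // inclusive cursor for next call
        }
        return out;
    }
}
```

This is the "ask for the page and drop the first (redundant) one" scheme from the comment; a UI that wants N fresh entities per page would simply request N+1 after the first page.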
[jira] [Updated] (YARN-6207) Move application can fail when attempt add event is delayed
[ https://issues.apache.org/jira/browse/YARN-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bibin A Chundatt updated YARN-6207:
---

Attachment: YARN-6207.008.patch

Thank you [~rohithsharma] for review. Attaching new patch after handling checkstyle.

> Move application can fail when attempt add event is delayed
>
> Key: YARN-6207
> URL: https://issues.apache.org/jira/browse/YARN-6207
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler
> Reporter: Bibin A Chundatt
> Assignee: Bibin A Chundatt
>
> Attachments: YARN-6207.001.patch, YARN-6207.002.patch, YARN-6207.003.patch, YARN-6207.004.patch, YARN-6207.005.patch, YARN-6207.006.patch, YARN-6207.007.patch, YARN-6207.008.patch
>
> *Steps to reproduce*
> 1. Submit application and delay attempt add to Scheduler
> (Simulate using debug at EventDispatcher for SchedulerEventDispatcher)
> 2. Call move application to destination queue.
> {noformat}
> Caused by: org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.preValidateMoveApplication(CapacityScheduler.java:2086)
> at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.moveApplicationAcrossQueue(RMAppManager.java:669)
> at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.moveApplicationAcrossQueues(ClientRMService.java:1231)
> at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBServiceImpl.java:388)
> at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:537)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1892)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
> at org.apache.hadoop.ipc.Client.call(Client.java:1429)
> at org.apache.hadoop.ipc.Client.call(Client.java:1339)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
> at com.sun.proxy.$Proxy7.moveApplicationAcrossQueues(Unknown Source)
> at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.moveApplicationAcrossQueues(ApplicationClientProtocolPBClientImpl.java:398)
> ... 16 more
> {noformat}
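The NPE in the trace above arises because `preValidateMoveApplication` runs before the scheduler has processed the attempt-add event, so the scheduler-side application object is still null. A minimal sketch of the defensive pattern (hypothetical names, not the actual CapacityScheduler code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Illustrative guard against moving an app the scheduler has not added yet. */
class MoveValidatorSketch {
    private final ConcurrentMap<String, Object> appsInScheduler =
        new ConcurrentHashMap<>();

    /** Called when the scheduler processes the ATTEMPT_ADDED event. */
    void onAttemptAdded(String appId) {
        appsInScheduler.put(appId, new Object());
    }

    /** Pre-validate a queue move; fail cleanly instead of NPE-ing later. */
    void preValidateMove(String appId, String targetQueue) {
        Object app = appsInScheduler.get(appId);
        if (app == null) {
            throw new IllegalStateException("App " + appId
                + " is not yet registered with the scheduler; retry the move");
        }
        // ... queue placement / ACL checks on 'app' would follow here ...
    }
}
```

The point of the guard is that a delayed event now produces an actionable error message rather than a bare NullPointerException surfaced through the RPC layer.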
[jira] [Commented] (YARN-6059) Update paused container state in the state store
[ https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895504#comment-15895504 ]

Hadoop QA commented on YARN-6059:
---

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 39s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 18m 39s | YARN-5972 passed |
| +1 | compile | 0m 55s | YARN-5972 passed |
| +1 | checkstyle | 0m 35s | YARN-5972 passed |
| +1 | mvnsite | 0m 57s | YARN-5972 passed |
| +1 | mvneclipse | 0m 29s | YARN-5972 passed |
| +1 | findbugs | 1m 27s | YARN-5972 passed |
| +1 | javadoc | 0m 39s | YARN-5972 passed |
| +1 | mvninstall | 0m 48s | the patch passed |
| +1 | compile | 0m 52s | the patch passed |
| +1 | javac | 0m 52s | the patch passed |
| -0 | checkstyle | 0m 34s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 12 new + 363 unchanged - 2 fixed = 375 total (was 365) |
| +1 | mvnsite | 0m 47s | the patch passed |
| +1 | mvneclipse | 0m 21s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 37s | the patch passed |
| -1 | javadoc | 0m 32s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 235 unchanged - 0 fixed = 236 total (was 235) |
| -1 | unit | 17m 5s | hadoop-yarn-server-nodemanager in the patch failed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
| | | 49m 47s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6059 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12856033/YARN-6059-YARN-5972.009.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 3e08ed7171ce 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-5972 / 3a68608 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15171/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/15171/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt |
| unit |
[jira] [Updated] (YARN-6059) Update paused container state in the state store
[ https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hitesh Sharma updated YARN-6059:
---

Attachment: YARN-6059-YARN-5972.009.patch

> Update paused container state in the state store
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Hitesh Sharma
> Assignee: Hitesh Sharma
>
> Attachments: YARN-5216-YARN-6059.001.patch, YARN-6059-YARN-5972.001.patch, YARN-6059-YARN-5972.002.patch, YARN-6059-YARN-5972.003.patch, YARN-6059-YARN-5972.004.patch, YARN-6059-YARN-5972.005.patch, YARN-6059-YARN-5972.006.patch, YARN-6059-YARN-5972.007.patch, YARN-6059-YARN-5972.008.patch, YARN-6059-YARN-5972.009.patch
[jira] [Commented] (YARN-6059) Update paused container state in the state store
[ https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895484#comment-15895484 ]

Hitesh Sharma commented on YARN-6059:
---

Thanks [~kkaranasos] for the excellent feedback, and sorry for the delay in getting back. I have addressed the comments in the latest patch. Unfortunately the checkstyle and javadoc logs have been purged, so I will look at the results of the latest patch run and resolve any remaining issues.

> Update paused container state in the state store
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Hitesh Sharma
> Assignee: Hitesh Sharma
>
> Attachments: YARN-5216-YARN-6059.001.patch, YARN-6059-YARN-5972.001.patch, YARN-6059-YARN-5972.002.patch, YARN-6059-YARN-5972.003.patch, YARN-6059-YARN-5972.004.patch, YARN-6059-YARN-5972.005.patch, YARN-6059-YARN-5972.006.patch, YARN-6059-YARN-5972.007.patch, YARN-6059-YARN-5972.008.patch
[jira] [Commented] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895474#comment-15895474 ]

Benoy Antony commented on YARN-6285:
---

+1. [~sunilg], if you have time, could you please review? If there are no further review comments, I can commit this on Tuesday.

> Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
>
> Key: YARN-6285
> URL: https://issues.apache.org/jira/browse/YARN-6285
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: yunjiong zhao
> Assignee: yunjiong zhao
>
> Attachments: YARN-6285.001.patch, YARN-6285.002.patch, YARN-6285.003.patch
>
> When users call ApplicationClientProtocol.getApplications, it can return
> a large amount of data and generate a lot of garbage on the ResourceManager,
> causing long GC pauses.
> For example, on one of our RMs, a call to the REST API
> "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering 944
> applications.
> getApplications has a limit parameter, but some users might not set it, in
> which case the limit defaults to Long.MAX_VALUE.
[jira] [Commented] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895449#comment-15895449 ]

Hadoop QA commented on YARN-6285:
---

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 23s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 17s | trunk passed |
| -1 | compile | 7m 33s | hadoop-yarn in trunk failed. |
| +1 | checkstyle | 0m 57s | trunk passed |
| +1 | mvnsite | 2m 4s | trunk passed |
| +1 | mvneclipse | 1m 0s | trunk passed |
| +1 | findbugs | 3m 42s | trunk passed |
| +1 | javadoc | 1m 24s | trunk passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 22s | the patch passed |
| -1 | compile | 6m 14s | hadoop-yarn in the patch failed. |
| -1 | javac | 6m 14s | hadoop-yarn in the patch failed. |
| +1 | checkstyle | 0m 52s | the patch passed |
| +1 | mvnsite | 1m 43s | the patch passed |
| +1 | mvneclipse | 0m 54s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 3m 42s | the patch passed |
| +1 | javadoc | 1m 22s | the patch passed |
| +1 | unit | 0m 32s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 2m 29s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 39m 47s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 99m 46s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6285 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12856015/YARN-6285.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux f2fbdc7890ee 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 279d187 |
| Default Java | 1.8.0_121 |
| compile | https://builds.apache.org/job/PreCommit-YARN-Build/15170/artifact/patchprocess/branch-compile-hadoop-yarn-project_hadoop-yarn.txt |
| findbugs | v3.0.0 |
[jira] [Updated] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yunjiong zhao updated YARN-6285:
---

Attachment: YARN-6285.003.patch

Updated the patch according to [~benoyantony]'s comments. Set the default value to Long.MAX_VALUE, so by default it changes nothing. Thanks [~benoyantony] for your time.

> Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
>
> Key: YARN-6285
> URL: https://issues.apache.org/jira/browse/YARN-6285
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: yunjiong zhao
> Assignee: yunjiong zhao
>
> Attachments: YARN-6285.001.patch, YARN-6285.002.patch, YARN-6285.003.patch
>
> When users call ApplicationClientProtocol.getApplications, it can return
> a large amount of data and generate a lot of garbage on the ResourceManager,
> causing long GC pauses.
> For example, on one of our RMs, a call to the REST API
> "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering 944
> applications.
> getApplications has a limit parameter, but some users might not set it, in
> which case the limit defaults to Long.MAX_VALUE.
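The change under discussion, a server-side cap whose default of Long.MAX_VALUE leaves behavior unchanged unless an operator configures it, boils down to clamping the client-supplied limit. A sketch with hypothetical names (not the actual patch code or config key):

```java
/** Illustrative server-side clamp of a client-supplied getApplications limit. */
class GetApplicationsLimitSketch {
    // Hypothetical default mirroring "by default, it changes nothing".
    static final long DEFAULT_MAX_APPS_IN_RESPONSE = Long.MAX_VALUE;

    private final long configuredMax;

    GetApplicationsLimitSketch(long configuredMax) {
        this.configuredMax = configuredMax;
    }

    /**
     * A request that omits the limit effectively asks for Long.MAX_VALUE,
     * so clamping covers both the "forgot to set it" and "set it too high"
     * cases with one expression.
     */
    long effectiveLimit(long requestedLimit) {
        return Math.min(requestedLimit, configuredMax);
    }
}
```

An operator could then bound the worst-case response size (and the GC pressure described in the issue) without breaking clients that already pass a reasonable limit.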
[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895334#comment-15895334 ]

Hadoop QA commented on YARN-6275:
---

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 14m 23s | trunk passed |
| +1 | compile | 0m 20s | trunk passed |
| +1 | checkstyle | 0m 16s | trunk passed |
| +1 | mvnsite | 0m 25s | trunk passed |
| +1 | mvneclipse | 0m 17s | trunk passed |
| +1 | findbugs | 0m 34s | trunk passed |
| +1 | javadoc | 0m 18s | trunk passed |
| +1 | mvninstall | 0m 19s | the patch passed |
| +1 | compile | 0m 18s | the patch passed |
| +1 | javac | 0m 18s | the patch passed |
| +1 | checkstyle | 0m 15s | the patch passed |
| +1 | mvnsite | 0m 21s | the patch passed |
| +1 | mvneclipse | 0m 15s | the patch passed |
| +1 | shellcheck | 0m 13s | The patch generated 0 new + 98 unchanged - 1 fixed = 98 total (was 99) |
| +1 | shelldocs | 0m 12s | There were no new shelldocs issues. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 41s | the patch passed |
| +1 | javadoc | 0m 16s | the patch passed |
| +1 | unit | 1m 0s | hadoop-sls in the patch passed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 22m 36s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6275 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12856009/YARN-6275.004.patch |
| Optional Tests | asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall findbugs checkstyle |
| uname | Linux 40335b9381b3 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 279d187 |
| Default Java | 1.8.0_121 |
| shellcheck | v0.4.5 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15169/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15169/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Fail to show real-time tracking charts in SLS
> -
>
> Key: YARN-6275
> URL:
[jira] [Commented] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895331#comment-15895331 ] Hadoop QA commented on YARN-6285: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 56s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 27s{color} | {color:red} hadoop-yarn in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 5m 6s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 6s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 39s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 3s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}111m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.reservation.TestFairSchedulerPlanFollower | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6285 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855993/YARN-6285.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux aec59f13cd39 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8db7a8c | | Default Java | 1.8.0_121 | | compile |
[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6275: --- Attachment: YARN-6275.004.patch Uploaded v4 to fix the shell style issue. > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch, YARN-6275.002.patch, > YARN-6275.003.patch, YARN-6275.004.patch > > > # The {{html}} directory is not under the current working directory. > # There is a bug in class {{SLSWebApp}}; here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
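The NPE above comes from Jetty's {{ResourceHandler}} being handed a resource base (the {{html}} directory) that does not exist at the current working directory. A minimal sketch of the defensive lookup such a fix needs — this is a hypothetical helper, not the attached YARN-6275 patch; the candidate paths and the class name are invented for illustration:

```java
import java.io.File;

/**
 * Hypothetical helper: pick the first candidate directory that actually
 * exists before wiring it into Jetty's ResourceHandler, instead of passing
 * a path that may be missing (which surfaces as the NPE in the stack trace).
 */
public class HtmlDirResolver {
    static String resolveExistingDir(String... candidates) {
        for (String c : candidates) {
            if (new File(c).isDirectory()) {
                return c;
            }
        }
        return null; // caller should fail fast with a clear error message
    }

    public static void main(String[] args) {
        // "." always exists, so it is chosen once earlier candidates miss
        String base = resolveExistingDir("html", "src/main/html", ".");
        System.out.println("resource base = " + base);
    }
}
```

Failing fast with a clear message when no candidate exists is friendlier than letting Jetty dereference a null resource base deep inside {{ResourceHandler.handle}}.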
[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895294#comment-15895294 ] Hadoop QA commented on YARN-6275: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 new + 98 unchanged - 1 fixed = 99 total (was 99) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 10s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s{color} | {color:green} hadoop-sls in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6275 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12856001/YARN-6275.003.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall findbugs checkstyle | | uname | Linux 801325ed2736 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 279d187 | | Default Java | 1.8.0_121 | | shellcheck | v0.4.5 | | findbugs | v3.0.0 | | shellcheck | https://builds.apache.org/job/PreCommit-YARN-Build/15168/artifact/patchprocess/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15168/testReport/ | | modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15168/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fail to show real-time tracking
[jira] [Comment Edited] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895289#comment-15895289 ] Benoy Antony edited comment on YARN-6285 at 3/4/17 12:05 AM: - Thanks for the patch, [~zhaoyunjiong]. A few comments: 1. {code} public static final long DEFAULT_RM_MAX_LIMIT_GET_APPLICATIONS = DEFAULT_RM_MAX_COMPLETED_APPLICATIONS; {code} I think it's better to keep the _DEFAULT_RM_MAX_LIMIT_GET_APPLICATIONS_ independent of _DEFAULT_RM_MAX_COMPLETED_APPLICATIONS_. If we need to maintain backward compatibility, the default should be Integer.MAX_VALUE. I personally think that it should be set to 1000. 2. Please indicate the default value in the property description. 3. {code} LOG.info("User " + callerUGI.getUserName() + " called getApplications with limit=" + limit); {code} Please indicate that the value is changed to the max value. {code} LOG.info("User " + callerUGI.getUserName() + " called getApplications with limit=" + limit + ". Changing it to " + maxLimitGetApplications); {code} 4. {code} protected void setMaxLimitGetApplications(long limit) {code} Can it be package-private instead of protected? was (Author: benoyantony): Thanks for the patch, [~zhaoyunjiong]. A few comments: 1. {code} public static final long DEFAULT_RM_MAX_LIMIT_GET_APPLICATIONS = DEFAULT_RM_MAX_COMPLETED_APPLICATIONS; {code} I think it's better to keep the _DEFAULT_RM_MAX_LIMIT_GET_APPLICATIONS_ independent of _DEFAULT_RM_MAX_COMPLETED_APPLICATIONS_. If we need to maintain backward compatibility, the default should be Integer.MAX_VALUE. I personally think that it should be set to 1000. 2. Please indicate the default value in the property description. 3. {code} LOG.info("User " + callerUGI.getUserName() + " called getApplications with limit=" + limit); {code} Please indicate that the value is changed to the max value. LOG.info("User " + callerUGI.getUserName() + " called getApplications with limit=" + limit + ". 
Changing it to " + maxLimitGetApplications); {code} 4. {code} protected void setMaxLimitGetApplications(long limit) {code} Can it be package-private instead of protected? > Add option to set max limit on ResourceManager for > ApplicationClientProtocol.getApplications > > > Key: YARN-6285 > URL: https://issues.apache.org/jira/browse/YARN-6285 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: yunjiong zhao >Assignee: yunjiong zhao > Attachments: YARN-6285.001.patch, YARN-6285.002.patch > > > When users call ApplicationClientProtocol.getApplications, it can return > a lot of data and generate a lot of garbage on the ResourceManager, causing > long GC pauses. > For example, on one of our RMs, a call to the REST API > "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering 944 > applications. > getApplications has a limit parameter, but some users might not set it, and > then the limit will be Long.MAX_VALUE.
[jira] [Commented] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895289#comment-15895289 ] Benoy Antony commented on YARN-6285: Thanks for the patch, [~zhaoyunjiong]. A few comments: 1. {code} public static final long DEFAULT_RM_MAX_LIMIT_GET_APPLICATIONS = DEFAULT_RM_MAX_COMPLETED_APPLICATIONS; {code} I think it's better to keep the _DEFAULT_RM_MAX_LIMIT_GET_APPLICATIONS_ independent of _DEFAULT_RM_MAX_COMPLETED_APPLICATIONS_. If we need to maintain backward compatibility, the default should be Integer.MAX_VALUE. I personally think that it should be set to 1000. 2. Please indicate the default value in the property description. 3. {code} LOG.info("User " + callerUGI.getUserName() + " called getApplications with limit=" + limit); {code} Please indicate that the value is changed to the max value. {code} LOG.info("User " + callerUGI.getUserName() + " called getApplications with limit=" + limit + ". Changing it to " + maxLimitGetApplications); {code} 4. {code} protected void setMaxLimitGetApplications(long limit) {code} Can it be package-private instead of protected? > Add option to set max limit on ResourceManager for > ApplicationClientProtocol.getApplications > > > Key: YARN-6285 > URL: https://issues.apache.org/jira/browse/YARN-6285 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: yunjiong zhao >Assignee: yunjiong zhao > Attachments: YARN-6285.001.patch, YARN-6285.002.patch > > > When users call ApplicationClientProtocol.getApplications, it can return > a lot of data and generate a lot of garbage on the ResourceManager, causing > long GC pauses. > For example, on one of our RMs, a call to the REST API > "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering 944 > applications. > getApplications has a limit parameter, but some users might not set it, and > then the limit will be Long.MAX_VALUE. 
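The server-side cap Benoy describes in comment 3 can be sketched as follows. This is an illustrative stand-in, not the committed code: the constant value, method names, and log wording are placeholders for the names in the patch under review.

```java
/**
 * Sketch of clamping a client-supplied getApplications limit on the RM side.
 * MAX_LIMIT stands in for the proposed configurable maximum (Benoy suggests
 * a default of 1000); clamp() is a hypothetical name for the logic.
 */
public class GetApplicationsLimit {
    static final long MAX_LIMIT = 1000L; // assumed default under discussion

    static long clamp(long requestedLimit) {
        if (requestedLimit > MAX_LIMIT) {
            // Per the review comment: log that the requested value was overridden
            System.out.println("getApplications called with limit="
                + requestedLimit + ". Changing it to " + MAX_LIMIT);
            return MAX_LIMIT;
        }
        return requestedLimit;
    }

    public static void main(String[] args) {
        System.out.println(clamp(Long.MAX_VALUE)); // client left limit unset
        System.out.println(clamp(50));             // modest limit kept as-is
    }
}
```

A client that never sets the limit gets Long.MAX_VALUE by default, so the clamp is what keeps a single REST call from materializing every completed application on the RM heap.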
[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6275: --- Attachment: YARN-6275.003.patch Uploaded v3 to fix the shell style issue. > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch, YARN-6275.002.patch, > YARN-6275.003.patch > > > # The {{html}} directory is not under the current working directory. > # There is a bug in class {{SLSWebApp}}; here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Commented] (YARN-6270) WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting
[ https://issues.apache.org/jira/browse/YARN-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895241#comment-15895241 ] Hudson commented on YARN-6270: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11345 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11345/]) YARN-6270. WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA (jianhe: rev 279d187f723d01658ef8698a29263652e2a05618) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfiguration.java > WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting > > > Key: YARN-6270 > URL: https://issues.apache.org/jira/browse/YARN-6270 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sumana Sathish >Assignee: Xuan Gong > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: YARN-6270.1.patch > > > yarn log cli: yarn logs -applicationId application_1488441635386_0005 -am 1 > failed with a connection exception when HA is enabled > {code} > Unable to get AM container informations for the > application:application_1488441635386_0005 > java.net.ConnectException: Connection refused (Connection refused) > {code}
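The HA-aware lookup this fix calls for can be sketched like this. It is an illustrative stand-in, not the committed WebAppUtils change: the configuration is modeled as a plain map rather than a YarnConfiguration, and a real client would probe for the active RM rather than taking the first rm-id. The configuration key names themselves are standard YARN HA keys.

```java
import java.util.Map;

/**
 * Sketch: when RM HA is enabled, the RM web-app URL must be derived from a
 * per-rm-id key (yarn.resourcemanager.webapp.address.<rm-id>) instead of the
 * single yarn.resourcemanager.webapp.address key, which is what a non-HA-aware
 * lookup reads and why the CLI above got "Connection refused".
 */
public class RmWebAppUrl {
    static String getRMWebAppURL(Map<String, String> conf) {
        if ("true".equals(conf.get("yarn.resourcemanager.ha.enabled"))) {
            // Take the first configured rm-id for illustration; real code
            // would iterate the ids and find the active RM.
            String[] ids = conf.get("yarn.resourcemanager.ha.rm-ids").split(",");
            return "http://"
                + conf.get("yarn.resourcemanager.webapp.address." + ids[0].trim());
        }
        return "http://"
            + conf.getOrDefault("yarn.resourcemanager.webapp.address", "0.0.0.0:8088");
    }
}
```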
[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895239#comment-15895239 ] Sangjin Lee commented on YARN-6256: --- (TimelineReaderWebServices.java) - l.270: need a space between "fromId." and "fromId" (the same for a few other places in this file) (FlowRunEntityReader.java) - l.257: I thought the check for single entity read was intentional (I guess this was introduced by YARN-4237). Do we no longer need it? cc [~varun_saxena] One high-level question: how would the UI use the fromId to render itself? For example, if you ask for 50 entities for the first page, UI would pick the last (50th) in that set as the fromId for the next page, right? But if you specify that as the fromId, that entity is returned again in the next call because the fromId is inclusive, right? Then how does UI deal with that? Does it ask for 51 entities in the next page and drop the first (redundant) one? Am I misunderstanding this? Is making it precise (e.g. making fromId exclusive) feasible? > Add FROM_ID info key for timeline entities in reader response. > --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch, > YARN-6256-YARN-5355.0002.patch > > > It is a continuation of YARN-6027 to add the FROM_ID key in all other timeline > entity responses, which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to revisit the idprefix > filter, which is now provided separately.
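Sangjin's inclusive-fromId question can be made concrete with a small sketch. The helper names are hypothetical and entity ids are modeled as plain strings rather than timeline entities: because the reader returns entities starting AT fromId, a UI fetching any page after the first asks for pageSize+1 entities and drops the redundant first one.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of paging against an inclusive fromId, as discussed in the comment.
 * readEntities() mimics a reader that returns entities starting at fromId;
 * nextPage() shows the client-side workaround: over-fetch by one, drop the
 * first (already shown) entity.
 */
public class FromIdPaging {
    static List<String> readEntities(List<String> all, String fromId, int limit) {
        List<String> page = new ArrayList<>();
        int start = (fromId == null) ? 0 : all.indexOf(fromId); // inclusive
        for (int i = start; i >= 0 && i < all.size() && page.size() < limit; i++) {
            page.add(all.get(i));
        }
        return page;
    }

    static List<String> nextPage(List<String> all, String lastShownId, int pageSize) {
        // Ask for one extra entity and drop the redundant first one.
        List<String> raw = readEntities(all, lastShownId, pageSize + 1);
        return raw.isEmpty() ? raw : raw.subList(1, raw.size());
    }
}
```

Making fromId exclusive, as Sangjin suggests, would remove the over-fetch-and-drop step entirely; the sketch just shows what clients must do under the inclusive semantics.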
[jira] [Commented] (YARN-6282) Recreate interceptor chain for different attemptId in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895240#comment-15895240 ] Hadoop QA commented on YARN-6282: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 2s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6282 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855989/YARN-6282.v3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c0e223a5be06 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8db7a8c | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15167/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15167/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Recreate interceptor chain for different attemptId in the same node in > AMRMProxy > > > Key: YARN-6282 > URL: https://issues.apache.org/jira/browse/YARN-6282 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6282.v1.patch, YARN-6282.v2.patch, > YARN-6282.v3.patch > > >
[jira] [Updated] (YARN-6270) WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting
[ https://issues.apache.org/jira/browse/YARN-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6270: -- Target Version/s: 2.8.0, 3.0.0-alpha3 (was: 2.8.0) Fix Version/s: 3.0.0-alpha3 > WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting > > > Key: YARN-6270 > URL: https://issues.apache.org/jira/browse/YARN-6270 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sumana Sathish >Assignee: Xuan Gong > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: YARN-6270.1.patch > > > yarn log cli: yarn logs -applicationId application_1488441635386_0005 -am 1 > failed with a connection exception when HA is enabled > {code} > Unable to get AM container informations for the > application:application_1488441635386_0005 > java.net.ConnectException: Connection refused (Connection refused) > {code}
[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895225#comment-15895225 ] Hadoop QA commented on YARN-6275: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s{color} | {color:red} The patch generated 3 new + 98 unchanged - 1 fixed = 101 total (was 99) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 13s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s{color} | {color:green} hadoop-sls in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6275 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855991/YARN-6275.002.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall findbugs checkstyle | | uname | Linux d0d00f15f3bd 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8db7a8c | | Default Java | 1.8.0_121 | | shellcheck | v0.4.5 | | findbugs | v3.0.0 | | shellcheck | https://builds.apache.org/job/PreCommit-YARN-Build/15165/artifact/patchprocess/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15165/testReport/ | | modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15165/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fail to show real-time tracking
[jira] [Commented] (YARN-6270) WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting
[ https://issues.apache.org/jira/browse/YARN-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895202#comment-15895202 ] Hadoop QA commented on YARN-6270: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6270 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855708/YARN-6270.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3972c3c3d565 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8db7a8c | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15164/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15164/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting > > > Key: YARN-6270 > URL: https://issues.apache.org/jira/browse/YARN-6270 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sumana Sathish >Assignee: Xuan Gong > Attachments: YARN-6270.1.patch > > > yarn log cli: yarn logs -applicationId application_1488441635386_0005 -am 1 > failed with the connection exception when HA is enabled > {code} > Unable to get AM container informations for
[jira] [Comment Edited] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895190#comment-15895190 ] yunjiong zhao edited comment on YARN-6285 at 3/3/17 10:54 PM: -- Fix checkstyle. Failed unit test in TestRMRestart is not related. was (Author: zhaoyunjiong): Fix checkstyle. Failure unit test is not related. > Add option to set max limit on ResourceManager for > ApplicationClientProtocol.getApplications > > > Key: YARN-6285 > URL: https://issues.apache.org/jira/browse/YARN-6285 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: yunjiong zhao >Assignee: yunjiong zhao > Attachments: YARN-6285.001.patch, YARN-6285.002.patch > > > When users call ApplicationClientProtocol.getApplications, it can return > lots of data and generate lots of garbage on the ResourceManager, causing > long GC pauses. > For example, on one of our RMs, calling the REST API > "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering 944 > applications. > getApplications has a limit parameter, but some users might not set it, and > then the limit will be Long.MAX_VALUE.
[jira] [Updated] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yunjiong zhao updated YARN-6285: Attachment: YARN-6285.002.patch Fix checkstyle. The failing unit test is not related. > Add option to set max limit on ResourceManager for > ApplicationClientProtocol.getApplications > > > Key: YARN-6285 > URL: https://issues.apache.org/jira/browse/YARN-6285 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: yunjiong zhao >Assignee: yunjiong zhao > Attachments: YARN-6285.001.patch, YARN-6285.002.patch > > > When users call ApplicationClientProtocol.getApplications, it can return > lots of data and generate lots of garbage on the ResourceManager, causing > long GC pauses. > For example, on one of our RMs, calling the REST API > "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering 944 > applications. > getApplications has a limit parameter, but some users might not set it, and > then the limit will be Long.MAX_VALUE.
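The proposed server-side cap can be sketched in a few lines; the class and constant below are illustrative, not taken from the patch. The idea is simply to clamp the client-supplied limit to a configured maximum so an unset limit (Long.MAX_VALUE) cannot force the RM to serialize every application:

```java
// Hypothetical sketch of the proposed getApplications cap, not the patch itself.
public class AppsLimitSketch {
    // assumed default for illustration; the real default is up to the patch
    static final long DEFAULT_MAX = 1000;

    static long effectiveLimit(long requested, long configuredMax) {
        if (requested <= 0) {
            // treat a non-positive value as "not set" and fall back to the cap
            return configuredMax;
        }
        return Math.min(requested, configuredMax);
    }
}
```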
[jira] [Commented] (YARN-6271) yarn rmadin -getGroups returns information from standby RM
[ https://issues.apache.org/jira/browse/YARN-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895186#comment-15895186 ] Hudson commented on YARN-6271: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11344 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11344/]) YARN-6271. yarn rmadin -getGroups returns information from standby RM. (junping_du: rev 8db7a8c3aea3d989361f32cca5b271e9653773b6) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java > yarn rmadin -getGroups returns information from standby RM > -- > > Key: YARN-6271 > URL: https://issues.apache.org/jira/browse/YARN-6271 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Sumana Sathish >Assignee: Jian He >Priority: Critical > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-6271.1.patch > >
[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6275: --- Attachment: YARN-6275.002.patch Thanks [~rkanter] for the review. Uploaded patch v2 for your comment. > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch, YARN-6275.002.patch > > > # The {{html}} directory is not under the current working directory. > # There is a bug in Class {{SLSWebApp}}, here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:745) > {code}
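The first problem listed in the description — the {{html}} directory being resolved against the current working directory — is what leaves Jetty's ResourceHandler with no resource base, producing the NPE above. A minimal standalone sketch of the defensive lookup (class and method names are hypothetical, not the SLSWebApp code):

```java
import java.io.File;

// Hypothetical sketch: locate a usable static-resource root before handing
// it to Jetty, instead of letting ResourceHandler NPE on a missing base.
public class SlsHtmlDirSketch {
    // Return the first candidate that is an existing directory, or null.
    static String findHtmlRoot(String... candidates) {
        for (String c : candidates) {
            File dir = new File(c);
            if (dir.isDirectory()) {
                return dir.getAbsolutePath();
            }
        }
        return null; // caller should fail fast with a clear error instead
    }
}
```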
[jira] [Updated] (YARN-6282) Recreate interceptor chain for different attemptId in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6282: --- Attachment: YARN-6282.v3.patch > Recreate interceptor chain for different attemptId in the same node in > AMRMProxy > > > Key: YARN-6282 > URL: https://issues.apache.org/jira/browse/YARN-6282 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6282.v1.patch, YARN-6282.v2.patch, > YARN-6282.v3.patch > > > In AMRMProxy, an interceptor chain is created per application attempt. But > the pipeline mapping uses application Id as key. So when a different attempt > comes in the same node, we need to recreate the interceptor chain for it, > instead of using the existing one. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
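The description above can be sketched as follows; the class and method names are hypothetical stand-ins, not the actual AMRMProxyService API. The map stays keyed by application id, but the pipeline remembers which attempt built it and is recreated when a newer attempt arrives on the same node:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the keying fix described in the issue.
public class PipelineMapSketch {
    static class Pipeline {
        final int attemptId;
        Pipeline(int attemptId) { this.attemptId = attemptId; }
    }

    private final Map<String, Pipeline> byAppId = new HashMap<>();

    Pipeline getOrRecreate(String appId, int attemptId) {
        Pipeline p = byAppId.get(appId);
        if (p == null || p.attemptId != attemptId) {
            // first attempt, or a different attempt on this node:
            // build a fresh interceptor chain instead of reusing the old one
            p = new Pipeline(attemptId);
            byAppId.put(appId, p);
        }
        return p;
    }
}
```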
[jira] [Updated] (YARN-6165) Intra-queue preemption occurs even when preemption is turned off for a specific queue.
[ https://issues.apache.org/jira/browse/YARN-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-6165: - Attachment: YARN-6165.001.patch {{IntraQueueCandidatesSelector#selectCandidates}} was not checking the enable/disable status of each queue prior to calculating the {{resToObtainByPartition}}. To test this, I manually modified each of the tests in {{TestProportionalCapacityPreemptionPolicyIntraQueue}} to disable preemption on all of the test queues. Without this fix, the tests passed when they should have failed. Meaning, with the preemption disabled on all of the test queues, the tests continued to select containers for preemption. However, when I added this fix, those modified tests started to fail. [~sunilg], [~leftnoteasy], and [~jlowe], any comments would be greatly appreciated. > Intra-queue preemption occurs even when preemption is turned off for a > specific queue. > -- > > Key: YARN-6165 > URL: https://issues.apache.org/jira/browse/YARN-6165 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, scheduler preemption >Affects Versions: 3.0.0-alpha2 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: YARN-6165.001.patch > > > Intra-queue preemption occurs even when preemption is turned on for the whole > cluster ({{yarn.resourcemanager.scheduler.monitor.enable == true}}) but > turned off for a specific queue > ({{yarn.scheduler.capacity.root.queue1.disable_preemption == true}}).
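The missing check described above amounts to filtering out queues whose per-queue disable_preemption flag is set before computing anything to preempt. A minimal standalone sketch (class and field names are hypothetical; the real patch adds the check inside IntraQueueCandidatesSelector):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of honoring per-queue disable_preemption.
public class PreemptableQueueSketch {
    static class Queue {
        final String name;
        final boolean preemptionDisabled;
        Queue(String name, boolean disabled) {
            this.name = name;
            this.preemptionDisabled = disabled;
        }
    }

    // Only queues that have not opted out are eligible for intra-queue
    // preemption; this is the check that was missing.
    static List<String> queuesEligibleForPreemption(List<Queue> queues) {
        List<String> eligible = new ArrayList<>();
        for (Queue q : queues) {
            if (!q.preemptionDisabled) {
                eligible.add(q.name);
            }
        }
        return eligible;
    }
}
```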
[jira] [Commented] (YARN-6271) yarn rmadin -getGroups returns information from standby RM
[ https://issues.apache.org/jira/browse/YARN-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895162#comment-15895162 ] Junping Du commented on YARN-6271: -- The two test failures are not related to the patch and have existed for a long time. +1. Committing. > yarn rmadin -getGroups returns information from standby RM > -- > > Key: YARN-6271 > URL: https://issues.apache.org/jira/browse/YARN-6271 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Sumana Sathish >Assignee: Jian He >Priority: Critical > Attachments: YARN-6271.1.patch > >
[jira] [Commented] (YARN-6282) Recreate interceptor chain for different attemptId in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895160#comment-15895160 ] Hadoop QA commented on YARN-6282: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 31s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6282 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855975/YARN-6282.v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b96121d736e8 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ac5ae00 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15162/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15162/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15162/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Recreate interceptor chain for different attemptId in the same node in > AMRMProxy > > > Key: YARN-6282 > URL:
[jira] [Updated] (YARN-5881) Enable configuration of queue capacity in terms of absolute resources
[ https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-5881: - Attachment: YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf Thanks [~seanpo03] for filing this ticket, and thanks [~curino] for the comments! Attached a design doc written by [~vinodkv]/[~sunilg]/[~jianhe] and me. It should cover all questions from Carlo. Please feel free to share your thoughts. [~seanpo03], can I take over the ticket if you're not on it yet? > Enable configuration of queue capacity in terms of absolute resources > - > > Key: YARN-5881 > URL: https://issues.apache.org/jira/browse/YARN-5881 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Sean Po >Assignee: Sean Po > Attachments: > YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf > > > Currently, the Yarn RM supports the configuration of queue capacity as a > proportion of cluster capacity. In the context of Yarn being used as a public > cloud service, it makes more sense if queues can be configured absolutely. > This will allow administrators to set usage limits more concretely and > simplify customer expectations for cluster allocation.
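For illustration only — the attached design doc is what defines the actual syntax — an absolute-resource queue capacity might be expressed in capacity-scheduler.xml along these lines, replacing a percentage with explicit memory/vcore amounts:

```xml
<!-- Hypothetical shape of an absolute-resource capacity value;
     today the same key holds a percentage, e.g. <value>30</value>. -->
<property>
  <name>yarn.scheduler.capacity.root.queue1.capacity</name>
  <value>[memory=10240,vcores=10]</value>
</property>
```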
[jira] [Commented] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain
[ https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895153#comment-15895153 ] Hadoop QA commented on YARN-6281: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6281 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855974/YARN-6281.v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux edb017513be6 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ac5ae00 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15161/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15161/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Cleanup when AMRMProxy fails to initialize a new interceptor chain > -- > > Key: YARN-6281 > URL: https://issues.apache.org/jira/browse/YARN-6281 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority:
[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers
[ https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895132#comment-15895132 ] Eric Badger commented on YARN-4266: --- [~templedf], I think you're misunderstanding what I was saying. When we instantiate the {{DockerRunCommand}}, we give it a user to run as. In this user remapping case, we've been setting that user to root so that we have permissions to perform the usermod remapping in the container. I'm saying, however, that we could set that user to the UID and GID of the user that submitted the job (i.e. {{runAsUser}} in the code). So let's say user foo:1000:1000 submitted a job. The NM will create the launch_container.sh script assuming user foo. Then we will go to launch the docker container. When we instantiate the {{DockerRunCommand}}, we would pass it the output of {{id -u}} and {{id -g}}. This sort of thing is already being done in [~tangzhankun]'s patch to get {{targetUID}}. The result would give us a command that looks like: {{docker run --user=1000:1000 ...}}. There isn't a security hole here that I can see because the user in the container will have the same UID/GID as the user that submitted the job. Inside the container, the username associated with the UID doesn't really matter. Outside of the container, everything written by the user in the container will have the same UID. A downside would be that the username inside of the container isn't meaningful and could be potentially very misleading to those who are unaware of how this is all being done. I've tested the --user=UID:GID option locally on a single-node cluster and have been successful. Files/logs/etc. written in the container are owned by the user that submitted the job, which is the UID:GID given in the --user option (foo:1000:1000 in the example above). 
There also isn't a problem with usernames being numbers (which the image could map to arbitrary UID/GIDs) because docker interprets all numbers in the --user option as UID/GIDs. I tested this locally to make sure. So even if there is a user named "2000" (with UID != 2000), the command {{docker run --user=2000}} will create a new user with UID 2000. > Allow whitelisted users to disable user re-mapping/squashing when launching > docker containers > - > > Key: YARN-4266 > URL: https://issues.apache.org/jira/browse/YARN-4266 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Sidharta Seethana >Assignee: Zhankun Tang > Attachments: YARN-4266.001.patch, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, > YARN-4266-branch-2.8.001.patch > > > Docker provides a mechanism (the --user switch) that enables us to specify > the user the container processes should run as. We use this mechanism today > when launching docker containers . In non-secure mode, we run the docker > container based on > `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in > secure mode, as the submitting user. However, this mechanism breaks down with > a large number of 'pre-created' images which don't necessarily have the users > available within the image. Examples of such images include shared images > that need to be used by multiple users. We need a way in which we can allow a > pre-defined set of users to run containers based on existing images, without > using the --user switch. 
There are some implications of disabling this user > squashing that we'll need to work through : log aggregation, artifact > deletion etc., -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
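The UID:GID approach discussed above can be sketched in a few lines. This is a hypothetical illustration only: the class and method names below are invented, and the real {{DockerRunCommand}} API differs. It shows why passing numeric ids is safe even when the image defines a user whose name happens to be a number.

```java
// Hypothetical sketch of passing numeric UID:GID to docker run, mirroring
// the idea discussed above. Not the actual DockerRunCommand API.
public class DockerUserFlag {

    // Build the --user argument from numeric uid/gid, e.g. "--user=1000:1000".
    // Docker interprets numeric values in --user as UID/GID, never as
    // usernames, so an image user literally named "2000" cannot hijack this.
    public static String userFlag(int uid, int gid) {
        if (uid < 0 || gid < 0) {
            throw new IllegalArgumentException("uid/gid must be non-negative");
        }
        return "--user=" + uid + ":" + gid;
    }

    // Assemble an illustrative docker run command line.
    public static String runCommand(int uid, int gid, String image) {
        return "docker run " + userFlag(uid, gid) + " " + image;
    }
}
```

For user foo:1000:1000 this yields {{docker run --user=1000:1000 centos:7}}, matching the command shape described in the comment above.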
[jira] [Commented] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895097#comment-15895097 ] Hadoop QA commented on YARN-6285: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 11s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 15s{color} | {color:red} hadoop-yarn in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 5m 59s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 59s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 283 unchanged - 0 fixed = 284 total (was 283) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 30s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6285 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855958/YARN-6285.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 0d3f18d0ef98 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 490abfb | | Default Java | 1.8.0_121 | | compile |
[jira] [Commented] (YARN-6218) Fix TestAMRMClient when using FairScheduler
[ https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895089#comment-15895089 ] Hudson commented on YARN-6218: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11343 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11343/]) YARN-6218. Fix TestAMRMClient when using FairScheduler. (Miklos Szegedi (rchiang: rev 2148b83993fd8ce73bcbc7677c57ee5028a59cd4) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java > Fix TestAMRMClient when using FairScheduler > --- > > Key: YARN-6218 > URL: https://issues.apache.org/jira/browse/YARN-6218 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-6218.000.patch, YARN-6218.001.patch, > YARN-6218.002.patch > > > We ran into this issue on v2. Allocation does not happen in the specified > amount of time. > Error Message > expected:<2> but was:<0> > Stacktrace > java.lang.AssertionError: expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientMatchStorage(TestAMRMClient.java:495)
[jira] [Updated] (YARN-6282) Recreate interceptor chain for different attemptId in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6282: --- Attachment: YARN-6282.v2.patch > Recreate interceptor chain for different attemptId in the same node in > AMRMProxy > > > Key: YARN-6282 > URL: https://issues.apache.org/jira/browse/YARN-6282 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6282.v1.patch, YARN-6282.v2.patch > > > In AMRMProxy, an interceptor chain is created per application attempt. But > the pipeline mapping uses application Id as key. So when a different attempt > comes in the same node, we need to recreate the interceptor chain for it, > instead of using the existing one.
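The recreate-on-new-attempt logic described above can be sketched as follows. This is a minimal hypothetical model, not the actual AMRMProxy code: the pipeline map stays keyed by application id, but each chain remembers the attempt id it was built for, so a later attempt on the same node replaces the stale chain instead of reusing it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the fix described above: pipeline mapping keyed by
// application id, with the chain recreated when a new attempt arrives.
public class PipelineRegistry {

    static class Chain {
        final int attemptId;
        Chain(int attemptId) { this.attemptId = attemptId; }
    }

    private final Map<String, Chain> pipelines = new HashMap<>();

    // Return the chain for this attempt, recreating it if the stored chain
    // belongs to a different attempt of the same application.
    public Chain getOrCreate(String appId, int attemptId) {
        Chain existing = pipelines.get(appId);
        if (existing == null || existing.attemptId != attemptId) {
            existing = new Chain(attemptId);  // recreate for the new attempt
            pipelines.put(appId, existing);
        }
        return existing;
    }
}
```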
[jira] [Updated] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain
[ https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6281: --- Attachment: YARN-6281.v2.patch > Cleanup when AMRMProxy fails to initialize a new interceptor chain > -- > > Key: YARN-6281 > URL: https://issues.apache.org/jira/browse/YARN-6281 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6281.v1.patch, YARN-6281.v2.patch > > > When an app starts, AMRMProxy.initializePipeline creates a new Interceptor > chain and adds it to its pipeline mapping. Then it initializes the chain and > returns. The problem is that when the chain initialization throws (e.g. > because of a configuration error, interceptor class not found, etc.), the chain > is not removed from AMRMProxy's pipeline mapping. > This patch also contains misc log message fixes in AMRMProxy.
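The cleanup described above boils down to removing the half-registered entry when chain initialization throws. A minimal sketch, with invented names rather than the actual AMRMProxy code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the cleanup: if interceptor-chain initialization
// throws, the entry just added to the pipeline mapping must be removed so
// later calls do not find a dead chain.
public class ChainInitializer {

    private final Map<String, Object> pipelines = new HashMap<>();

    public void initializePipeline(String appId, boolean failInit) {
        Object chain = new Object();
        pipelines.put(appId, chain);
        try {
            initChain(failInit);
        } catch (RuntimeException e) {
            pipelines.remove(appId);  // the fix: do not leave the dead chain behind
            throw e;
        }
    }

    // Stand-in for chain initialization, which can fail for reasons like a
    // configuration error or a missing interceptor class.
    private void initChain(boolean failInit) {
        if (failInit) {
            throw new RuntimeException("interceptor class not found");
        }
    }

    public boolean hasPipeline(String appId) {
        return pipelines.containsKey(appId);
    }
}
```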
[jira] [Comment Edited] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894960#comment-15894960 ] yunjiong zhao edited comment on YARN-6285 at 3/3/17 9:22 PM: - This patch allows setting a max limit on the RM for ApplicationClientProtocol.getApplications. Also, in the log it will tell the cluster admin which user called getApplications with a bigger limit than the max limit, like below {quote} INFO [main] resourcemanager.ClientRMService (ClientRMService.java:getApplications(878)) - User yunjzhao called getApplications with limit=9223372036854775807 {quote} was (Author: zhaoyunjiong): This patch allowed set a max limit on RM for ApplicationClientProtocol.getApplications. Also in the log, it will tell cluster admin which user called the getApplications with bigger limit than the max limit like below {quote} INFO [main] resourcemanager.ClientRMService (ClientRMService.java:getApplications(878)) - User yunjzhao called getApplications with limit=9223372036854775807 {quote} > Add option to set max limit on ResourceManager for > ApplicationClientProtocol.getApplications > > > Key: YARN-6285 > URL: https://issues.apache.org/jira/browse/YARN-6285 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: yunjiong zhao >Assignee: yunjiong zhao > Attachments: YARN-6285.001.patch > > > When users called ApplicationClientProtocol.getApplications, it will return > lots of data, and generate lots of garbage on ResourceManager which caused > long time GC. > For example, on one of our RM, when called rest API " http:// address:port>/ws/v1/cluster/apps" it can return 150MB data which have 944 > applications. > getApplications has a limit parameter, but some user might not set it, and > then the limit will be Long.MAX_VALUE.
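The clamp the patch proposes is a one-liner at its core. A hypothetical sketch (names are illustrative, not the actual ClientRMService code): requests whose limit exceeds the configured maximum are capped, and the real code would additionally log the calling user as shown in the log line quoted above.

```java
// Hypothetical sketch of clamping the client-supplied getApplications limit
// against a server-side maximum. Illustrative names only.
public class GetApplicationsLimit {

    // Callers that do not set a limit effectively ask for Long.MAX_VALUE,
    // which is what makes the server-side cap necessary.
    public static long effectiveLimit(long requestedLimit, long maxLimit) {
        if (requestedLimit > maxLimit) {
            // The real patch would also log the calling user here, e.g.
            // "User ... called getApplications with limit=...".
            return maxLimit;
        }
        return requestedLimit;
    }
}
```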
[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6275: --- Issue Type: Sub-task (was: Bug) Parent: YARN-5065 > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch > > > # Not put {{html}} directory under the current working directory. > # There is a bug in Class {{SLSWebApp}}, here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895038#comment-15895038 ] Haibo Chen commented on YARN-5269: -- IIRC, from one of our weekly discussions, [~gtCarrera9] pointed out that directly throwing exceptions in cases of failures is probably better than wrapping failures in a Response, since clients can ignore the Response and have wrong expectations. Can you confirm that, [~gtCarrera9]? > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Varun Saxena > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exceptions from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync.
[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895023#comment-15895023 ] Robert Kanter commented on YARN-6275: - I think we should fix this to work out of the box instead of requiring users to copy-paste the 'html' directory. > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch > > > # Not put {{html}} directory under the current working directory. > # There is a bug in Class {{SLSWebApp}}, here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:745) > {code}
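The NullPointerException above surfaces deep inside Jetty's ResourceHandler when the {{html}} directory is missing from the working directory. One out-of-the-box-friendly approach (a hypothetical sketch only, not the actual SLSWebApp fix) is to validate the directory up front and fail with a clear message:

```java
import java.io.File;

// Hypothetical sketch: guard against the missing 'html' directory before
// wiring it into a web resource handler, instead of failing later with an
// opaque NullPointerException inside Jetty.
public class HtmlDirCheck {

    // Resolve the html directory under the given working directory, throwing
    // an explicit error when it is absent.
    public static File resolveHtmlDir(String workingDir) {
        File html = new File(workingDir, "html");
        if (!html.isDirectory()) {
            throw new IllegalStateException(
                "SLS web resources not found under " + html.getPath()
                + "; the html directory must exist in the working directory");
        }
        return html;
    }
}
```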
[jira] [Commented] (YARN-6218) Fix TestAMRMClient when using FairScheduler
[ https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895010#comment-15895010 ] Ray Chiang commented on YARN-6218: -- +1. Checking this in soon. Filed YARN-6272 and YARN-6273 for some other test failures I'm seeing in other methods. > Fix TestAMRMClient when using FairScheduler > --- > > Key: YARN-6218 > URL: https://issues.apache.org/jira/browse/YARN-6218 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-6218.000.patch, YARN-6218.001.patch, > YARN-6218.002.patch > > > We ran into this issue on v2. Allocation does not happen in the specified > amount of time. > Error Message > expected:<2> but was:<0> > Stacktrace > java.lang.AssertionError: expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientMatchStorage(TestAMRMClient.java:495)
[jira] [Commented] (YARN-6272) TestAMRMClient#testAMRMClientWithContainerResourceChange fails intermittently
[ https://issues.apache.org/jira/browse/YARN-6272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895007#comment-15895007 ] Ray Chiang commented on YARN-6272: -- Latest stack trace: testAMRMClientWithContainerResourceChange[0](org.apache.hadoop.yarn.client.api.impl.TestAMRMClient) Time elapsed: 10.384 sec <<< FAILURE! java.lang.AssertionError: expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.doContainerResourceChange(TestAMRMClient.java:1127) at org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientWithContainerResourceChange(TestAMRMClient.java:1003) > TestAMRMClient#testAMRMClientWithContainerResourceChange fails intermittently > - > > Key: YARN-6272 > URL: https://issues.apache.org/jira/browse/YARN-6272 > Project: Hadoop YARN > Issue Type: Test > Components: yarn >Affects Versions: 3.0.0-alpha3 >Reporter: Ray Chiang > > I'm seeing this unit test fail fairly often in trunk: > testAMRMClientWithContainerResourceChange(org.apache.hadoop.yarn.client.api.impl.TestAMRMClient) > Time elapsed: 5.113 sec <<< FAILURE! 
> java.lang.AssertionError: expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.doContainerResourceChange(TestAMRMClient.java:1087) > at > org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientWithContainerResourceChange(TestAMRMClient.java:963)
[jira] [Commented] (YARN-6282) Recreate interceptor chain for different attemptId in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894989#comment-15894989 ] Hadoop QA commented on YARN-6282: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 20s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 48s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6282 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855950/YARN-6282.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux faf6b670e117 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 490abfb | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15158/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15158/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15158/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Recreate interceptor chain for different attemptId in the same node in > AMRMProxy > > >
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894977#comment-15894977 ] Varun Saxena commented on YARN-5269: Agree that this would need client API changes, so better to do it ASAP. We can probably return a Response object which contains a list of failures. > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Varun Saxena > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exceptions from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync.
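The two options debated above, returning a Response carrying per-entity failures versus throwing so clients cannot silently ignore them, can be combined in one small shape. This is a hypothetical sketch with invented names, not the actual TimelineClient API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of a putEntities response that records per-entity
// failures and can also throw, so a caller who never inspects the response
// still cannot proceed under a wrong expectation of success.
public class PutEntitiesResponse {

    private final List<String> failedEntityIds = new ArrayList<>();

    public void addFailure(String entityId) {
        failedEntityIds.add(entityId);
    }

    public List<String> getFailedEntityIds() {
        return Collections.unmodifiableList(failedEntityIds);
    }

    // Throwing variant of the same information.
    public void throwIfFailed() {
        if (!failedEntityIds.isEmpty()) {
            throw new IllegalStateException(
                "putEntities failed for: " + failedEntityIds);
        }
    }
}
```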
[jira] [Commented] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain
[ https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894972#comment-15894972 ] Hadoop QA commented on YARN-6281: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 29s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6281 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855951/YARN-6281.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d5b55428d847 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 490abfb | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15159/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15159/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15159/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Cleanup when AMRMProxy fails to initialize a new interceptor
[jira] [Comment Edited] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894962#comment-15894962 ] Varun Saxena edited comment on YARN-6256 at 3/3/17 8:12 PM: Thanks [~rohithsharma] for the patch. It looks pretty close. A couple of nits. # When we encode and decode as strings, app ids, flow run ids, etc. cannot contain the separator char, i.e. "!". So the static block in TestRowKeysAsString is not required. It was required in TestRowKeys because when we convert a long / int to bytes, the result may contain the same byte sequence as the separator char, so that had to be tested. In TestRowKeysAsString we can use any fixed flow run ID and App ID. We also do not need to use Separator#QUALIFIERS in the test class. # The changes to the TimelineEntityFilters javadoc are not really necessary; they are only there due to line reformatting and unnecessarily increase the size of the patch. If possible, exclude these changes. # In testEntityRowKey it is better to use the separator and escape char constants. Other than this, it looks fine. The checkstyle warning need not be fixed. was (Author: varun_saxena): Thanks [~rohithsharma] for the patch. It looks pretty close. A couple of nits. # When we encode and decode as strings, app ids, flow run ids, etc. cannot contain the separator char, i.e. "!". So the static block in TestRowKeysAsString is not required. It was required in TestRowKeys because when we convert a long / int to bytes, the result may contain the same byte sequence as the separator char, so that had to be tested. In TestRowKeysAsString we can use any fixed flow run ID and App ID. We also do not need to use Separator#QUALIFIERS in the test class. # The changes to the TimelineEntityFilters javadoc are not really necessary; they are only there due to line reformatting and unnecessarily increase the size of the patch. If possible, exclude these changes. Other than this, it looks fine. The checkstyle warning need not be fixed. > Add FROM_ID info key for timeline entities in reader response. 
> --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch, > YARN-6256-YARN-5355.0002.patch > > > It is continuation with YARN-6027 to add FROM_ID key in all other timeline > entity responses which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to retrospect on idprefix > filter which is now separately provided. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894962#comment-15894962 ] Varun Saxena commented on YARN-6256: Thanks [~rohithsharma] for the patch. It looks pretty close. A couple of nits. # When we encode and decode as strings, app ids, flow run ids, etc. cannot contain the separator char, i.e. "!". So the static block in TestRowKeysAsString is not required. It was required in TestRowKeys because when we convert a long / int to bytes, the result may contain the same byte sequence as the separator char, so that had to be tested. In TestRowKeysAsString we can use any fixed flow run ID and App ID. We also do not need to use Separator#QUALIFIERS in the test class. # The changes to the TimelineEntityFilters javadoc are not really necessary; they are only there due to line reformatting and unnecessarily increase the size of the patch. If possible, exclude these changes. Other than this, it looks fine. The checkstyle warning need not be fixed. > Add FROM_ID info key for timeline entities in reader response. > --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch, > YARN-6256-YARN-5355.0002.patch > > > It is a continuation of YARN-6027 to add the FROM_ID key in all other timeline > entity responses, which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to retrospect on the idprefix > filter which is now separately provided. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
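The first nit above can be illustrated with a small sketch. The separator value and the helper below are assumptions based on the comment text, not the actual timeline-service encoder: when row key components are joined as strings, numeric fields such as the flow run id are rendered as decimal digits and can never contain the "!" separator, so no escaping test is needed (unlike the byte-level encoding, where a long's bytes can collide with the separator bytes).

```java
// Hypothetical sketch of string-based row key joining (not the real
// TimelineService RowKey classes).
public class RowKeyAsString {
    // "!" matches the separator described in the comment; treat the constant
    // name as an assumption about the actual encoding.
    static final String SEPARATOR = "!";

    // Join key components as strings. Long.toString can only produce digits
    // (and a leading '-'), never "!", so numeric parts are always safe.
    static String joinKey(String clusterId, long flowRunId, String appId) {
        return clusterId + SEPARATOR + flowRunId + SEPARATOR + appId;
    }

    public static void main(String[] args) {
        String key = joinKey("cluster1", 1002345678919L, "application_1111111111_0001");
        // Splitting on the separator recovers exactly three components.
        System.out.println(key);
        System.out.println(key.split(SEPARATOR).length);
    }
}
```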
[jira] [Updated] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
[ https://issues.apache.org/jira/browse/YARN-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yunjiong zhao updated YARN-6285: Attachment: YARN-6285.001.patch This patch allows setting a max limit on the RM for ApplicationClientProtocol.getApplications. The log will also tell the cluster admin which user called getApplications with a limit larger than the configured max, like below {quote} INFO [main] resourcemanager.ClientRMService (ClientRMService.java:getApplications(878)) - User yunjzhao called getApplications with limit=9223372036854775807 {quote} > Add option to set max limit on ResourceManager for > ApplicationClientProtocol.getApplications > > > Key: YARN-6285 > URL: https://issues.apache.org/jira/browse/YARN-6285 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: yunjiong zhao >Assignee: yunjiong zhao > Attachments: YARN-6285.001.patch > > > When users call ApplicationClientProtocol.getApplications, it can return > a lot of data and generate a lot of garbage on the ResourceManager, causing > long GC pauses. > For example, on one of our RMs, calling the REST API > "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering > 944 applications. > getApplications has a limit parameter, but some users might not set it, in > which case the limit defaults to Long.MAX_VALUE. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6285) Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications
yunjiong zhao created YARN-6285: --- Summary: Add option to set max limit on ResourceManager for ApplicationClientProtocol.getApplications Key: YARN-6285 URL: https://issues.apache.org/jira/browse/YARN-6285 Project: Hadoop YARN Issue Type: Improvement Reporter: yunjiong zhao Assignee: yunjiong zhao When users call ApplicationClientProtocol.getApplications, it can return a lot of data and generate a lot of garbage on the ResourceManager, causing long GC pauses. For example, on one of our RMs, calling the REST API "http://<address:port>/ws/v1/cluster/apps" returned 150MB of data covering 944 applications. getApplications has a limit parameter, but some users might not set it, in which case the limit defaults to Long.MAX_VALUE. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
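The improvement proposed in this issue amounts to clamping the client-requested limit to a server-side maximum. A minimal sketch follows; the class name, the hard-coded cap, and the log wording are all illustrative assumptions (the real patch would read the cap from YarnConfiguration under a property this sketch does not name):

```java
// Hypothetical sketch of clamping getApplications' limit on the RM side.
public class GetApplicationsLimit {
    // Assumed configured maximum; not an actual YARN default.
    static final long CONFIGURED_MAX = 1000L;

    static long effectiveLimit(long requestedLimit) {
        if (requestedLimit > CONFIGURED_MAX) {
            // Mirrors the idea of the INFO log quoted above: record that a
            // caller asked for more than the cap (user name omitted here).
            System.out.println("called getApplications with limit=" + requestedLimit
                + ", clamping to " + CONFIGURED_MAX);
            return CONFIGURED_MAX;
        }
        return requestedLimit;
    }

    public static void main(String[] args) {
        // A client that never sets the limit effectively asks for Long.MAX_VALUE.
        System.out.println(effectiveLimit(Long.MAX_VALUE)); // clamped to the cap
        System.out.println(effectiveLimit(50));             // small limits pass through
    }
}
```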
[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894928#comment-15894928 ] Hadoop QA commented on YARN-6275: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s{color} | {color:green} hadoop-sls in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6275 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855947/YARN-6275.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c7ccacf1d8e7 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 490abfb | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15157/testReport/ | | modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15157/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch > > > # Not put {{html}} directory under the current working directory. > # There is a
[jira] [Updated] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain
[ https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6281: --- Attachment: YARN-6281.v1.patch > Cleanup when AMRMProxy fails to initialize a new interceptor chain > -- > > Key: YARN-6281 > URL: https://issues.apache.org/jira/browse/YARN-6281 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6281.v1.patch > > > When a app starts, AMRMProxy.initializePipeline creates a new Interceptor > chain and add it to its pipeline mapping. Then it initializes the chain and > return. The problem is that when the chain initialization throws (e.g. > because of configuration error, interceptor class not found etc.), the chain > is not removed from AMRMProxy's pipeline mapping. > This patch also contains misc log message fixes in AMRMProxy. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
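The cleanup this issue describes can be sketched as follows. The class, field, and method names are illustrative stand-ins, not the actual AMRMProxyService code: the key point is that if initializing the freshly registered interceptor chain throws, the chain is removed from the pipeline mapping before the exception propagates.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the AMRMProxy pipeline-cleanup fix.
public class PipelineRegistry {
    private final Map<String, Object> chainByAppId = new ConcurrentHashMap<>();

    public void initializePipeline(String appId) {
        Object chain = new Object();      // stand-in for the interceptor chain
        chainByAppId.put(appId, chain);
        try {
            initChain(chain);             // may throw, e.g. interceptor class not found
        } catch (RuntimeException e) {
            chainByAppId.remove(appId);   // the cleanup this JIRA adds
            throw e;
        }
    }

    // Overridable so a test can simulate a chain whose initialization fails.
    protected void initChain(Object chain) { /* no-op in this sketch */ }

    public boolean hasPipeline(String appId) {
        return chainByAppId.containsKey(appId);
    }
}
```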
[jira] [Updated] (YARN-6282) Recreate interceptor chain for different attemptId in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6282: --- Attachment: YARN-6282.v1.patch > Recreate interceptor chain for different attemptId in the same node in > AMRMProxy > > > Key: YARN-6282 > URL: https://issues.apache.org/jira/browse/YARN-6282 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6282.v1.patch > > > In AMRMProxy, an interceptor chain is created per application attempt. But > the pipeline mapping uses application Id as key. So when a different attempt > comes in the same node, we need to recreate the interceptor chain for it, > instead of using the existing one. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
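The keying problem this issue describes can be sketched as below. Names are illustrative, not the actual AMRMProxy code: because the pipeline map is keyed by application id, the map must also remember which attempt the chain was built for, so that a later attempt on the same node triggers recreation instead of reuse.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of attempt-aware chain management in AMRMProxy.
public class AttemptAwarePipelines {
    // Maps application id -> attempt id the current chain was built for.
    private final Map<String, String> chainAttemptByAppId = new HashMap<>();

    /** Returns true if a new chain had to be (re)created for this attempt. */
    public boolean ensureChain(String appId, String attemptId) {
        String existing = chainAttemptByAppId.get(appId);
        if (attemptId.equals(existing)) {
            return false;                              // same attempt: reuse the chain
        }
        chainAttemptByAppId.put(appId, attemptId);     // new attempt: recreate the chain
        return true;
    }
}
```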
[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894904#comment-15894904 ] Hadoop QA commented on YARN-6256: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 42s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase in YARN-5355 has 1 
extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 9 new + 34 unchanged - 9 fixed = 43 total (was 43) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 45s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | YARN-6256 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855943/YARN-6256-YARN-5355.0002.patch | | Optional
[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6275: --- Description: # Not put {{html}} directory under the current working directory. # There is a bug in Class {{SLSWebApp}}, here is the stack trace: {code} java.lang.NullPointerException at org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:524) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) at java.lang.Thread.run(Thread.java:745) {code} was: # Make sure to put {{html}} directory under the current working directory. 
# There is a bug in Class {{SLSWebApp}}, here is the stack trace: {code} java.lang.NullPointerException at org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:524) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) at java.lang.Thread.run(Thread.java:745) {code} > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch > > > # Not put {{html}} directory under the current working directory. 
> # There is a bug in Class {{SLSWebApp}}, here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at >
[jira] [Created] (YARN-6284) hasAlreadyRun should be final in ResourceManager.StandbyTransitionRunnable
Daniel Templeton created YARN-6284: -- Summary: hasAlreadyRun should be final in ResourceManager.StandbyTransitionRunnable Key: YARN-6284 URL: https://issues.apache.org/jira/browse/YARN-6284 Project: Hadoop YARN Issue Type: Improvement Components: resourcemanager Affects Versions: 3.0.0-alpha2 Reporter: Daniel Templeton {code} // The atomic variable to make sure multiple threads with the same runnable // run only once. private AtomicBoolean hasAlreadyRun = new AtomicBoolean(false); {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6284) hasAlreadyRun should be final in ResourceManager.StandByTransitionRunnable
[ https://issues.apache.org/jira/browse/YARN-6284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-6284: --- Summary: hasAlreadyRun should be final in ResourceManager.StandByTransitionRunnable (was: hasAlreadyRun should be final in ResourceManager.StandbyTransitionRunnable) > hasAlreadyRun should be final in ResourceManager.StandByTransitionRunnable > -- > > Key: YARN-6284 > URL: https://issues.apache.org/jira/browse/YARN-6284 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton > Labels: newbie > > {code} > // The atomic variable to make sure multiple threads with the same > runnable > // run only once. > private AtomicBoolean hasAlreadyRun = new AtomicBoolean(false); > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
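The change suggested in this issue can be illustrated with a minimal sketch (the class name below is a stand-in, not the actual ResourceManager inner class): marking the field {{final}} fixes the reference, not the boolean inside it, so compareAndSet still works, and the final reference is safely published to all threads sharing the runnable.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the StandByTransitionRunnable run-once pattern.
public class StandByTransitionSketch implements Runnable {
    // The atomic variable to make sure multiple threads with the same runnable
    // run only once. "final" is safe: compareAndSet mutates the contained
    // value, never the reference.
    private final AtomicBoolean hasAlreadyRun = new AtomicBoolean(false);

    private int runs = 0;

    @Override
    public void run() {
        if (hasAlreadyRun.compareAndSet(false, true)) {
            runs++;   // body executes at most once, however often run() is called
        }
    }

    public int getRuns() {
        return runs;
    }
}
```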
[jira] [Created] (YARN-6283) Improve "other" state store exception log message in RMStateStore.notifyStoreOperationFailed()
Daniel Templeton created YARN-6283: -- Summary: Improve "other" state store exception log message in RMStateStore.notifyStoreOperationFailed() Key: YARN-6283 URL: https://issues.apache.org/jira/browse/YARN-6283 Project: Hadoop YARN Issue Type: Improvement Components: resourcemanager Affects Versions: 3.0.0-alpha2 Reporter: Daniel Templeton Priority: Minor Currently, if {{notifyStateStoreOperationFailed()}} is called when both HA and fail-fast are disabled the message logged is, "Skip the state-store error." For a warn-level message, that's pretty useless. Instead, the message should include information about the exception that was passed in and provide a little context about what's going on. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
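A more informative message of the kind this issue asks for might look like the sketch below. The class name, the wording, and the exact HA / fail-fast flags are illustrative assumptions, not the committed fix; the point is to include the cause and the context instead of the bare "Skip the state-store error."

```java
// Hypothetical sketch of a richer state-store failure message.
public class StateStoreLogSketch {
    static String describeFailure(Exception e, boolean haEnabled, boolean failFast) {
        // Explain what happened, why the RM is not failing over or exiting,
        // and what the underlying exception was.
        return "State-store operation failed, but HA is "
            + (haEnabled ? "enabled" : "disabled")
            + " and fail-fast is " + (failFast ? "enabled" : "disabled")
            + "; continuing without transitioning. Cause: " + e;
    }

    public static void main(String[] args) {
        System.out.println(describeFailure(new Exception("ZooKeeper session expired"),
            false, false));
    }
}
```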
[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6275: --- Attachment: YARN-6275.001.patch > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6275.001.patch > > > # Make sure to put {{html}} directory under the current working directory. > # There is a bug in Class {{SLSWebApp}}, here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was 
sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
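The first problem listed in this issue (the {{html}} directory not being found under the current working directory) can be guarded against with an explicit check at startup. The resolution logic and the error message below are illustrative assumptions, not the committed fix; a clear failure here is preferable to the opaque NullPointerException deep inside Jetty's ResourceHandler.

```java
import java.io.File;

// Hypothetical sketch of validating the SLS web app's static-content root.
public class HtmlDirCheck {
    static File resolveHtmlDir(File cwd) {
        File html = new File(cwd, "html");
        if (!html.isDirectory()) {
            // Fail fast with an actionable message instead of letting the
            // resource handler NPE later while serving a request.
            throw new IllegalStateException(
                "Cannot find 'html' directory under " + cwd.getAbsolutePath()
                + "; run SLS from a directory containing the html resources.");
        }
        return html;
    }
}
```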
[jira] [Updated] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6275: --- Description: # Make sure to put {{html}} directory under the current working directory. # There is a bug in Class {{SLSWebApp}}, here is the stack trace: {code} java.lang.NullPointerException at org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:524) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) at java.lang.Thread.run(Thread.java:745) {code} was: Stack trace: {code} java.lang.NullPointerException at org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:524) at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) at java.lang.Thread.run(Thread.java:745) {code} > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > > # Make sure to put {{html}} directory under the current working directory. 
> # There is a bug in Class {{SLSWebApp}}, here is the stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at >
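The first step in the updated description above (placing the {{html}} directory under the current working directory) can be sanity-checked up front; if it is missing, Jetty's ResourceHandler has no resource base to serve and requests fail. The class below is a hypothetical pre-flight check, not part of the actual SLS code.

```java
import java.io.File;

// Hypothetical pre-flight check for the SLS web app: the real-time
// tracking charts are served from an "html" directory that must sit
// under the current working directory when the simulator starts.
public class SlsHtmlDirCheck {

  // Returns true if <workingDir>/html exists and is a directory.
  public static boolean htmlDirPresent(String workingDir) {
    return new File(workingDir, "html").isDirectory();
  }

  public static void main(String[] args) {
    String cwd = System.getProperty("user.dir");
    System.out.println("html dir present under " + cwd + ": "
        + htmlDirPresent(cwd));
  }
}
```

Running such a check before starting the web app turns the opaque NullPointerException into an actionable error message.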
[jira] [Updated] (YARN-6282) Recreate interceptor chain for different attemptId in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6282: --- Summary: Recreate interceptor chain for different attemptId in the same node in AMRMProxy (was: Recreate interceptor chain when different attempt in the same node in AMRMProxy) > Recreate interceptor chain for different attemptId in the same node in > AMRMProxy > > > Key: YARN-6282 > URL: https://issues.apache.org/jira/browse/YARN-6282 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > > In AMRMProxy, an interceptor chain is created per application attempt. But > the pipeline mapping uses application Id as key. So when a different attempt > comes in the same node, we need to recreate the interceptor chain for it, > instead of using the existing one. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
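The bug described in this issue can be sketched with a minimal, purely illustrative registry (the names below are not the real AMRMProxy classes): because the pipeline map is keyed by application id, a second attempt on the same node would silently reuse the first attempt's chain unless the stored attempt id is compared and the chain recreated.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the AMRMProxy pipeline-mapping fix: the map is
// keyed by application id, so we must remember which attempt a chain was
// built for and recreate it when a new attempt id shows up.
public class PipelineRegistry {

  static class Chain {
    final String attemptId;
    Chain(String attemptId) { this.attemptId = attemptId; }
  }

  private final Map<String, Chain> byAppId = new HashMap<>();

  // Returns the chain for this attempt, recreating it when a different
  // attempt id arrives for an application we already track.
  public Chain getOrCreate(String appId, String attemptId) {
    Chain existing = byAppId.get(appId);
    if (existing == null || !existing.attemptId.equals(attemptId)) {
      existing = new Chain(attemptId);  // recreate for the new attempt
      byAppId.put(appId, existing);
    }
    return existing;
  }
}
```

Without the attempt-id comparison, the second lookup for a new attempt would return the stale chain, which is exactly the behavior this issue corrects.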
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894869#comment-15894869 ] Haibo Chen commented on YARN-5269: -- Labeling this as a yarn-5355 merge blocker since it is client-facing and should probably be resolved sooner rather than later. > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Varun Saxena > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync.
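The "future kind of result" idea floated in the description above can be sketched as follows; the class and method names are illustrative only, not the real TimelineClient API.

```java
import java.util.concurrent.CompletableFuture;

// Hedged sketch of returning a future from an async put: any exception
// thrown by the write completes the future exceptionally, so the caller
// can attach handlers or block and observe the error instead of having
// it silently swallowed.
public class AsyncWriter {

  public static CompletableFuture<Void> putEntitiesAsync(Runnable write) {
    return CompletableFuture.runAsync(write);
  }
}
```

A caller could then do `putEntitiesAsync(...).join()` to surface a storage-side failure, or chain `exceptionally(...)` to handle it without blocking.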
[jira] [Updated] (YARN-6282) Recreate interceptor chain when different attempt in the same node in AMRMProxy
[ https://issues.apache.org/jira/browse/YARN-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6282: --- Issue Type: Sub-task (was: Bug) Parent: YARN-2915 > Recreate interceptor chain when different attempt in the same node in > AMRMProxy > --- > > Key: YARN-6282 > URL: https://issues.apache.org/jira/browse/YARN-6282 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > > In AMRMProxy, an interceptor chain is created per application attempt. But > the pipeline mapping uses application Id as key. So when a different attempt > comes in the same node, we need to recreate the interceptor chain for it, > instead of using the existing one.
[jira] [Created] (YARN-6282) Recreate interceptor chain when different attempt in the same node in AMRMProxy
Botong Huang created YARN-6282: -- Summary: Recreate interceptor chain when different attempt in the same node in AMRMProxy Key: YARN-6282 URL: https://issues.apache.org/jira/browse/YARN-6282 Project: Hadoop YARN Issue Type: Bug Reporter: Botong Huang Assignee: Botong Huang Priority: Minor In AMRMProxy, an interceptor chain is created per application attempt. But the pipeline mapping uses application Id as key. So when a different attempt comes in the same node, we need to recreate the interceptor chain for it, instead of using the existing one.
[jira] [Updated] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-5269: - Labels: YARN-5355 yarn-5355-merge-blocker (was: YARN-5355) > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Varun Saxena > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync.
[jira] [Updated] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain
[ https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6281: --- Issue Type: Sub-task (was: Bug) Parent: YARN-2915 > Cleanup when AMRMProxy fails to initialize a new interceptor chain > -- > > Key: YARN-6281 > URL: https://issues.apache.org/jira/browse/YARN-6281 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > > When a app starts, AMRMProxy.initializePipeline creates a new Interceptor > chain and add it to its pipeline mapping. Then it initializes the chain and > return. The problem is that when the chain initialization throws (e.g. > because of configuration error, interceptor class not found etc.), the chain > is not removed from AMRMProxy's pipeline mapping. > This patch also contains misc log message fixes in AMRMProxy.
[jira] [Comment Edited] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log
[ https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894782#comment-15894782 ] Yufei Gu edited comment on YARN-6042 at 3/3/17 6:58 PM: [~Tao Jie], YARN-5437 is an umbrella which adds useful messages of scheduler in WebUI. YARN-4329 is the FS part, but we can always add more useful information in WebUI. Not a bad idea to keep improvement on that. Other than that, we need add more queue metrics on scheduler WebUI, which I cannot remember a JIRA for that. This JIRA will potentially dump a very long message, there has been a link for the RM log file in WebUI, and adding a link for the new log file may be a reasonable solution. was (Author: yufeigu): [~Tao Jie], YARN-5437 is an umbrella which adds useful message of scheduler in WebUI. YARN-4329 is the FS part, but we can always add more useful information in WebUI. Not a bad idea to keep improvement on that. Other than that, we need add more queue metrics on scheduler WebUI, which I cannot remember a JIRA for that. This JIRA will potentially dump a very long message, add a link to WebUI for the new log file will be a reasonable solution. > Dump scheduler and queue state information into FairScheduler DEBUG log > --- > > Key: YARN-6042 > URL: https://issues.apache.org/jira/browse/YARN-6042 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6042.001.patch, YARN-6042.002.patch, > YARN-6042.003.patch, YARN-6042.004.patch, YARN-6042.005.patch, > YARN-6042.006.patch, YARN-6042.007.patch, YARN-6042.008.patch > > > To improve the debugging of scheduler issues it would be a big improvement to > be able to dump the scheduler state into a log on request. > The Dump the scheduler state at a point in time would allow debugging of a > scheduler that is not hung (deadlocked) but also not assigning containers. 
> Currently we do not have a proper overview of what state the scheduler and > the queues are in and we have to make assumptions or guess > The scheduler and queue state needed would include (not exhaustive): > - instantaneous and steady fair share (app / queue) > - AM share and resources > - weight > - app demand > - application run state (runnable/non runnable) > - last time at fair/min share
[jira] [Updated] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain
[ https://issues.apache.org/jira/browse/YARN-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6281: --- Description: When a app starts, AMRMProxy.initializePipeline creates a new Interceptor chain and add it to its pipeline mapping. Then it initializes the chain and return. The problem is that when the chain initialization throws (e.g. because of configuration error, interceptor class not found etc.), the chain is not removed from AMRMProxy's pipeline mapping. This patch also contains misc log message fixes in AMRMProxy. was:When a app starts, AMRMProxy.initializePipeline creates a new Interceptor chain and add it to its pipeline mapping. Then it initializes the chain and return. The problem is that when the chain initialization throws (e.g. because of configuration error, interceptor class not found etc.), the chain is not removed from AMRMProxy's pipeline mapping. > Cleanup when AMRMProxy fails to initialize a new interceptor chain > -- > > Key: YARN-6281 > URL: https://issues.apache.org/jira/browse/YARN-6281 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > > When a app starts, AMRMProxy.initializePipeline creates a new Interceptor > chain and add it to its pipeline mapping. Then it initializes the chain and > return. The problem is that when the chain initialization throws (e.g. > because of configuration error, interceptor class not found etc.), the chain > is not removed from AMRMProxy's pipeline mapping. > This patch also contains misc log message fixes in AMRMProxy.
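The cleanup described above can be sketched with a minimal, illustrative stand-in (these are not the real AMRMProxyService classes): register the chain, initialize it, and remove it from the mapping again if initialization throws.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the YARN-6281 fix: if interceptor-chain
// initialization throws (bad config, missing interceptor class, ...),
// the half-registered chain must be removed from the pipeline mapping,
// otherwise later requests find a stale, never-initialized entry.
public class PipelineInitializer {

  interface Chain {
    void init() throws Exception;
  }

  private final Map<String, Chain> pipelines = new HashMap<>();

  public void initializePipeline(String appId, Chain chain) throws Exception {
    pipelines.put(appId, chain);
    try {
      chain.init();             // may throw during initialization
    } catch (Exception e) {
      pipelines.remove(appId);  // the fix: do not leave the broken chain behind
      throw e;
    }
  }

  public boolean isRegistered(String appId) {
    return pipelines.containsKey(appId);
  }
}
```

The key invariant is that an entry exists in the mapping only if its chain initialized successfully.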
[jira] [Updated] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-6256: Attachment: YARN-6256-YARN-5355.0002.patch updated patch fixing review comments and test failures > Add FROM_ID info key for timeline entities in reader response. > --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch, > YARN-6256-YARN-5355.0002.patch > > > It is continuation with YARN-6027 to add FROM_ID key in all other timeline > entity responses which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to retrospect on idprefix > filter which is now separately provided.
[jira] [Created] (YARN-6281) Cleanup when AMRMProxy fails to initialize a new interceptor chain
Botong Huang created YARN-6281: -- Summary: Cleanup when AMRMProxy fails to initialize a new interceptor chain Key: YARN-6281 URL: https://issues.apache.org/jira/browse/YARN-6281 Project: Hadoop YARN Issue Type: Bug Reporter: Botong Huang Assignee: Botong Huang Priority: Minor When a app starts, AMRMProxy.initializePipeline creates a new Interceptor chain and add it to its pipeline mapping. Then it initializes the chain and return. The problem is that when the chain initialization throws (e.g. because of configuration error, interceptor class not found etc.), the chain is not removed from AMRMProxy's pipeline mapping.
[jira] [Commented] (YARN-6275) Fail to show real-time tracking charts in SLS
[ https://issues.apache.org/jira/browse/YARN-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894837#comment-15894837 ] Robert Kanter commented on YARN-6275: - I'm pretty sure this was broken by HADOOP-10075 when we did the Jetty 9 upgrade. > Fail to show real-time tracking charts in SLS > - > > Key: YARN-6275 > URL: https://issues.apache.org/jira/browse/YARN-6275 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > > Stack trace: > {code} > java.lang.NullPointerException > at > org.eclipse.jetty.server.handler.ResourceHandler.handle(ResourceHandler.java:499) > at org.apache.hadoop.yarn.sls.web.SLSWebApp$1.handle(SLSWebApp.java:152) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:524) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Commented] (YARN-6270) WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting
[ https://issues.apache.org/jira/browse/YARN-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894829#comment-15894829 ] Jian He commented on YARN-6270: --- lgtm > WebUtils.getRMWebAppURLWithScheme() needs to honor RM HA setting > > > Key: YARN-6270 > URL: https://issues.apache.org/jira/browse/YARN-6270 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sumana Sathish >Assignee: Xuan Gong > Attachments: YARN-6270.1.patch > > > yarn log cli: yarn logs -applicationId application_1488441635386_0005 -am 1 > failed with the connection exception when HA is enabled > {code} > Unable to get AM container informations for the > application:application_1488441635386_0005 > java.net.ConnectException: Connection refused (Connection refused) > {code}
[jira] [Commented] (YARN-6278) -Pyarn-ui build seems broken in trunk
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894826#comment-15894826 ] Sreenath Somarajapuram commented on YARN-6278: -- The changes look good to me. > -Pyarn-ui build seems broken in trunk > - > > Key: YARN-6278 > URL: https://issues.apache.org/jira/browse/YARN-6278 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sunil G >Assignee: Sunil G >Priority: Critical > Attachments: YARN-6278.0001.patch > > > Link to the error is > [here|https://builds.apache.org/job/PreCommit-HDFS-Build/18535/artifact/patchprocess/patch-compile-root.txt] > {code} > qunit-notifications#0.1.1 bower_components/qunit-notifications > [INFO] > [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui --- > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/merge-trees/index.js:33 > class MergeTrees { > ^ > Unexpected reserved word > SyntaxError: Unexpected reserved word > at Module._compile (module.js:439:25) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > at Function.Module._load (module.js:312:12) > at Module.require (module.js:364:17) > at require (module.js:380:17) > at Object. > (/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/broccoli-merge-trees/index.js:2:18) > at Module._compile (module.js:456:26) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > {code}
[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers
[ https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894809#comment-15894809 ] Daniel Templeton commented on YARN-4266: Thanks for testing the patch, [~ebadger]! The reason we can't just specify the user in the run command is that the NM will write the launch script and all the other data for the job into the working directory owned by the job owner. We then mount that directory into the Docker container and exec it. If we set the user in the run command, the Docker container wouldn't have the permissions to access the directory or launch the script. If we tried to write the working directory et al as the user ID we intend to use in the Docker container, we open a potential security hole, because the user with that ID on the NM would be able to access it. > Allow whitelisted users to disable user re-mapping/squashing when launching > docker containers > - > > Key: YARN-4266 > URL: https://issues.apache.org/jira/browse/YARN-4266 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Sidharta Seethana >Assignee: Zhankun Tang > Attachments: YARN-4266.001.patch, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, > YARN-4266-branch-2.8.001.patch > > > Docker provides a mechanism (the --user switch) that enables us to specify > the user the container processes should run as. We use this mechanism today > when launching docker containers . In non-secure mode, we run the docker > container based on > `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in > secure mode, as the submitting user. However, this mechanism breaks down with > a large number of 'pre-created' images which don't necessarily have the users > available within the image. 
Examples of such images include shared images > that need to be used by multiple users. We need a way in which we can allow a > pre-defined set of users to run containers based on existing images, without > using the --user switch. There are some implications of disabling this user > squashing that we'll need to work through : log aggregation, artifact > deletion etc.,
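One variant discussed in this thread is passing a numeric UID:GID to docker's {{--user}} flag rather than a username, since numeric ids need not pre-exist inside the image. The builder below is a hypothetical sketch of how such a command line could be assembled; it is not Hadoop's actual docker command builder, and the exact flags the NM emits differ.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical builder for the "docker run --user UID:GID" approach:
// numeric ids sidestep the need for the user to be created inside the
// image, while the bind-mounted working directory stays accessible to
// that uid because the NM wrote it as the same user on the host.
public class DockerRunBuilder {

  public static List<String> buildRunCommand(String image, long uid, long gid,
                                             String workDir) {
    List<String> cmd = new ArrayList<>();
    cmd.add("docker");
    cmd.add("run");
    cmd.add("--user");
    cmd.add(uid + ":" + gid);          // numeric ids, no /etc/passwd entry needed
    cmd.add("-v");
    cmd.add(workDir + ":" + workDir);  // mount the NM-owned working directory
    cmd.add(image);
    return cmd;
  }
}
```

As the comments above note, whether this is acceptable still hinges on the host-side permissions of the mounted working directory, not just on the flag itself.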
[jira] [Updated] (YARN-6218) Fix TestAMRMClient when using FairScheduler
[ https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated YARN-6218: - Summary: Fix TestAMRMClient when using FairScheduler (was: TestAMRMClient fails with fair scheduler) > Fix TestAMRMClient when using FairScheduler > --- > > Key: YARN-6218 > URL: https://issues.apache.org/jira/browse/YARN-6218 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-6218.000.patch, YARN-6218.001.patch, > YARN-6218.002.patch > > > We ran into this issue on v2. Allocation does not happen in the specified > amount of time. > Error Message > expected:<2> but was:<0> > Stacktrace > java.lang.AssertionError: expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientMatchStorage(TestAMRMClient.java:495)
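Failures like the one quoted above ("allocation does not happen in the specified amount of time") typically call for polling the allocation count with a deadline rather than asserting after a single fixed wait, since FairScheduler assigns containers on its own update cycle. The helper below is an illustrative sketch of that pattern, not the actual TestAMRMClient fix.

```java
import java.util.function.IntSupplier;

// Illustrative poll-with-deadline helper for flaky scheduler tests:
// re-check the observed count on a short interval until it matches the
// expectation or the timeout expires.
public class AllocationWait {

  public static boolean waitForCount(IntSupplier actual, int expected,
                                     long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (actual.getAsInt() == expected) {
        return true;
      }
      Thread.sleep(50);  // re-check on a short interval
    }
    return actual.getAsInt() == expected;  // one final check at the deadline
  }
}
```

A test would then assert on the helper's return value instead of comparing counts immediately after a sleep.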
[jira] [Commented] (YARN-6278) -Pyarn-ui build seems broken in trunk
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894791#comment-15894791 ] Hadoop QA commented on YARN-6278: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 32s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 5m 17s{color} | {color:red} hadoop-yarn-ui in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 6s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6278 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855924/YARN-6278.0001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux ab965f57d1f4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 490abfb | | Default Java | 1.8.0_121 | | compile | https://builds.apache.org/job/PreCommit-YARN-Build/15155/artifact/patchprocess/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15155/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15155/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > -Pyarn-ui build seems broken in trunk > - > > Key: YARN-6278 > URL: https://issues.apache.org/jira/browse/YARN-6278 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sunil G >Assignee: Sunil G >Priority: Critical > Attachments: YARN-6278.0001.patch > > > Link to the error is > [here|https://builds.apache.org/job/PreCommit-HDFS-Build/18535/artifact/patchprocess/patch-compile-root.txt] > {code} > qunit-notifications#0.1.1 bower_components/qunit-notifications > [INFO] > [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui --- >
[jira] [Commented] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log
[ https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894782#comment-15894782 ] Yufei Gu commented on YARN-6042: [~Tao Jie], YARN-5437 is an umbrella which adds useful messages of the scheduler in the WebUI. YARN-4329 is the FS part, but we can always add more useful information in the WebUI. Not a bad idea to keep improving that. Other than that, we need to add more queue metrics on the scheduler WebUI, though I cannot remember a JIRA for that. This JIRA will potentially dump a very long message; adding a link to the WebUI for the new log file would be a reasonable solution. > Dump scheduler and queue state information into FairScheduler DEBUG log > --- > > Key: YARN-6042 > URL: https://issues.apache.org/jira/browse/YARN-6042 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6042.001.patch, YARN-6042.002.patch, > YARN-6042.003.patch, YARN-6042.004.patch, YARN-6042.005.patch, > YARN-6042.006.patch, YARN-6042.007.patch, YARN-6042.008.patch > > > To improve the debugging of scheduler issues it would be a big improvement to > be able to dump the scheduler state into a log on request. > The Dump the scheduler state at a point in time would allow debugging of a > scheduler that is not hung (deadlocked) but also not assigning containers. > Currently we do not have a proper overview of what state the scheduler and > the queues are in and we have to make assumptions or guess > The scheduler and queue state needed would include (not exhaustive): > - instantaneous and steady fair share (app / queue) > - AM share and resources > - weight > - app demand > - application run state (runnable/non runnable) > - last time at fair/min share
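The state-dump idea in the description above can be sketched as follows; all names are invented for illustration and do not match the actual FairScheduler patch. Serializing each queue's key metrics into one message makes it easy to route the potentially very long output to a dedicated log file, as the comment suggests.

```java
import java.util.List;
import java.util.Locale;

// Illustrative sketch of dumping scheduler/queue state on request, so a
// scheduler that is not deadlocked but also not assigning containers can
// be diagnosed from its logs.
public class SchedulerStateDump {

  static class QueueState {
    final String name;
    final double fairShare;
    final double demand;
    final boolean runnable;
    QueueState(String name, double fairShare, double demand, boolean runnable) {
      this.name = name;
      this.fairShare = fairShare;
      this.demand = demand;
      this.runnable = runnable;
    }
  }

  // Builds a single log-friendly line summarizing every queue.
  public static String dump(List<QueueState> queues) {
    StringBuilder sb = new StringBuilder("SchedulerState{");
    for (QueueState q : queues) {
      sb.append(String.format(Locale.ROOT,
          "[%s fairShare=%.1f demand=%.1f runnable=%b]",
          q.name, q.fairShare, q.demand, q.runnable));
    }
    return sb.append('}').toString();
  }
}
```

In a real scheduler the dump would also cover AM share, weight, and last-time-at-fair/min-share, per the list in the description.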
[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers
[ https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894748#comment-15894748 ] Eric Badger commented on YARN-4266: --- Also, as an aside, is there any reason that we want to go with the usermod approach instead of leveraging docker to do this for us in the run command? https://docs.docker.com/engine/reference/run/#user We can still use the {{--user}} flag, but instead of passing a username we can just pass the UID/GID of runAsUser. According to the docker documentation, it will create the user if it doesn't exist, which means that we don't need to have a predefined user, plus we don't need to start the container as root. Until we get to user namespace remapping as [~shaneku...@gmail.com] alluded to earlier, it seems to me that this would be a less hacky way to get around the permissions issues. Thoughts? cc: [~tangzhankun], [~sidharta-s], [~dan...@cloudera.com] > Allow whitelisted users to disable user re-mapping/squashing when launching > docker containers > - > > Key: YARN-4266 > URL: https://issues.apache.org/jira/browse/YARN-4266 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Sidharta Seethana >Assignee: Zhankun Tang > Attachments: YARN-4266.001.patch, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, > YARN-4266-branch-2.8.001.patch > > > Docker provides a mechanism (the --user switch) that enables us to specify > the user the container processes should run as. We use this mechanism today > when launching docker containers . In non-secure mode, we run the docker > container based on > `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in > secure mode, as the submitting user.
However, this mechanism breaks down with > a large number of 'pre-created' images which don't necessarily have the users > available within the image. Examples of such images include shared images > that need to be used by multiple users. We need a way in which we can allow a > pre-defined set of users to run containers based on existing images, without > using the --user switch. There are some implications of disabling this user > squashing that we'll need to work through : log aggregation, artifact > deletion etc., -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
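The alternative Eric describes, passing the numeric UID/GID through docker's {{--user}} flag instead of a username, can be sketched as command assembly. The helper below is a hypothetical illustration, not Hadoop's actual container-runtime code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch only: assembles a docker run invocation that passes the submitting
// user's numeric UID/GID via --user, so no matching username has to exist
// inside the image and the container need not start as root.
class DockerRunCommand {
    static List<String> buildRunCommand(long uid, long gid, String image,
                                        List<String> containerCmd) {
        List<String> cmd = new ArrayList<>();
        cmd.add("docker");
        cmd.add("run");
        // Numeric "uid:gid" form of --user, per the docker run docs linked above.
        cmd.add("--user");
        cmd.add(uid + ":" + gid);
        cmd.add(image);
        cmd.addAll(containerCmd);
        return cmd;
    }

    public static void main(String[] args) {
        List<String> cmd = buildRunCommand(1000, 1000, "centos:7",
            Arrays.asList("bash", "-c", "id"));
        System.out.println(String.join(" ", cmd));
    }
}
```

Building the command as a list of arguments (rather than one shell string) also avoids quoting pitfalls when the launch command contains spaces.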
[jira] [Assigned] (YARN-6278) -Pyarn-ui build seems broken in trunk
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G reassigned YARN-6278: - Assignee: Sunil G > -Pyarn-ui build seems broken in trunk > - > > Key: YARN-6278 > URL: https://issues.apache.org/jira/browse/YARN-6278 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sunil G >Assignee: Sunil G >Priority: Critical > Attachments: YARN-6278.0001.patch > > > Link to the error is > [here|https://builds.apache.org/job/PreCommit-HDFS-Build/18535/artifact/patchprocess/patch-compile-root.txt] > {code} > qunit-notifications#0.1.1 bower_components/qunit-notifications > [INFO] > [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui --- > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/merge-trees/index.js:33 > class MergeTrees { > ^ > Unexpected reserved word > SyntaxError: Unexpected reserved word > at Module._compile (module.js:439:25) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > at Function.Module._load (module.js:312:12) > at Module.require (module.js:364:17) > at require (module.js:380:17) > at Object. > (/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/broccoli-merge-trees/index.js:2:18) > at Module._compile (module.js:456:26) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6278) -Pyarn-ui build seems broken in trunk
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894730#comment-15894730 ] Sunil G edited comment on YARN-6278 at 3/3/17 5:25 PM: --- Attaching patch after forcing correct node and npm version in compile machine. was (Author: sunilg): Attaching patch after forcing clear node and npm version in compile machine. > -Pyarn-ui build seems broken in trunk > - > > Key: YARN-6278 > URL: https://issues.apache.org/jira/browse/YARN-6278 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sunil G >Priority: Critical > Attachments: YARN-6278.0001.patch > > > Link to the error is > [here|https://builds.apache.org/job/PreCommit-HDFS-Build/18535/artifact/patchprocess/patch-compile-root.txt] > {code} > qunit-notifications#0.1.1 bower_components/qunit-notifications > [INFO] > [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui --- > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/merge-trees/index.js:33 > class MergeTrees { > ^ > Unexpected reserved word > SyntaxError: Unexpected reserved word > at Module._compile (module.js:439:25) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > at Function.Module._load (module.js:312:12) > at Module.require (module.js:364:17) > at require (module.js:380:17) > at Object. > (/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/broccoli-merge-trees/index.js:2:18) > at Module._compile (module.js:456:26) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6278) -Pyarn-ui build seems broken in trunk
[ https://issues.apache.org/jira/browse/YARN-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-6278: -- Attachment: YARN-6278.0001.patch Attaching patch after forcing clear node and npm version in compile machine. > -Pyarn-ui build seems broken in trunk > - > > Key: YARN-6278 > URL: https://issues.apache.org/jira/browse/YARN-6278 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Sunil G >Priority: Critical > Attachments: YARN-6278.0001.patch > > > Link to the error is > [here|https://builds.apache.org/job/PreCommit-HDFS-Build/18535/artifact/patchprocess/patch-compile-root.txt] > {code} > qunit-notifications#0.1.1 bower_components/qunit-notifications > [INFO] > [INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui --- > /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/merge-trees/index.js:33 > class MergeTrees { > ^ > Unexpected reserved word > SyntaxError: Unexpected reserved word > at Module._compile (module.js:439:25) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > at Function.Module._load (module.js:312:12) > at Module.require (module.js:364:17) > at require (module.js:380:17) > at Object. > (/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node_modules/broccoli-merge-trees/index.js:2:18) > at Module._compile (module.js:456:26) > at Object.Module._extensions..js (module.js:474:10) > at Module.load (module.js:356:32) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers
[ https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894675#comment-15894675 ] Eric Badger commented on YARN-4266: --- [~tangzhankun], I tried out the most recent patch that you put up and I have a few comments. {noformat} + if (disableUserRemapping) { {noformat} {noformat} + if (disableUserRemapping && targetUID != null) { {noformat} - Shouldn't {{disableUserRemapping}} be negated in these statements? We want to do the remapping if {{disableUserRemapping}} is *not* disabled. {noformat} +containerPredefinedUser = getDockerImageInfo("{{.Config.User}}", imageName, containerIdStr); {noformat} - There is no check to see whether {{containerPredefinedUser}} actually got set to anything. It's possible for docker inspect to not return a predefined user. In this case, we will be unable to do remapping and the usermod command will fail because the user will be blank. {noformat} +//get runAsUser's UID for container to usermod when init +if (!containerPredefinedUser.equals("root")) { + targetUID = getLocalUid(runAsUser); +} {noformat} - I think checking {{containerPredefinedUser}} misses some cases here. You may still want the container to be run as a different user even if the predefined user is root. If we don't remap when the predefined user is root, then anything written out to shared data volumes will have messed up permissions outside of the container. {noformat} +String cmd = "\"usermod -o -u " + targetUID + " " + containerPredefinedUser ++ " && su " + containerPredefinedUser + " bash -c '" {noformat} - It's not guaranteed that the predefined user has /bin/bash shell permissions. So it may be prudent to add a {{-s /bin/bash}} to the usermod command. Making the above changes I've been able to successfully submit and run jobs as multiple different users without permissions issues. The only necessity seems to be that there be a predefined user in the image that is being used. 
Additionally, this usermod approach doesn't currently deal with group permissions at all, which could be an issue especially in multi-tenant clusters. > Allow whitelisted users to disable user re-mapping/squashing when launching > docker containers > - > > Key: YARN-4266 > URL: https://issues.apache.org/jira/browse/YARN-4266 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Sidharta Seethana >Assignee: Zhankun Tang > Attachments: YARN-4266.001.patch, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, > YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, > YARN-4266-branch-2.8.001.patch > > > Docker provides a mechanism (the --user switch) that enables us to specify > the user the container processes should run as. We use this mechanism today > when launching docker containers . In non-secure mode, we run the docker > container based on > `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in > secure mode, as the submitting user. However, this mechanism breaks down with > a large number of 'pre-created' images which don't necessarily have the users > available within the image. Examples of such images include shared images > that need to be used by multiple users. We need a way in which we can allow a > pre-defined set of users to run containers based on existing images, without > using the --user switch. There are some implications of disabling this user > squashing that we'll need to work through : log aggregation, artifact > deletion etc., -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
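The fixes suggested in the review above (guard against an empty predefined user from {{docker inspect}}, and add {{-s /bin/bash}} so the remapped user has a usable shell) can be sketched as follows. Variable and method names here are illustrative, not the patch's actual code:

```java
// Sketch of the container-init command with the review's fixes applied.
// Not the actual patch: names and structure are hypothetical.
class UserRemapCommand {
    static String buildInitCommand(String predefinedUser, long targetUid,
                                   String launchScript) {
        if (predefinedUser == null || predefinedUser.isEmpty()) {
            // docker inspect can legitimately return an empty Config.User;
            // running usermod with a blank username would simply fail.
            throw new IllegalArgumentException(
                "image has no predefined user to remap");
        }
        // -o allows a non-unique UID; -s /bin/bash guarantees a shell even if
        // the image's predefined user has none.
        return "usermod -o -u " + targetUid + " -s /bin/bash " + predefinedUser
            + " && su " + predefinedUser + " bash -c '" + launchScript + "'";
    }

    public static void main(String[] args) {
        System.out.println(buildInitCommand("appuser", 1000, "./launch.sh"));
    }
}
```

As the review notes, this still leaves group permissions unaddressed; a fuller version would remap the GID as well.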
[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894659#comment-15894659 ] Rohith Sharma K S commented on YARN-6256: - Thanks for the consensus. I will update the patch to fix Varun's review comments! > Add FROM_ID info key for timeline entities in reader response. > --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch > > > It is continuation with YARN-6027 to add FROM_ID key in all other timeline > entity responses which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to retrospect on idprefix > filter which is now separately provided. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5956) Refactor ClientRMService
[ https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894632#comment-15894632 ] Hadoop QA commented on YARN-5956: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 57 unchanged - 5 fixed = 59 total (was 62) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 53s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5956 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855876/YARN-5956.12.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a3986906fa4e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e58fc76 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15154/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15154/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15154/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U:
[jira] [Comment Edited] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894621#comment-15894621 ] Sangjin Lee edited comment on YARN-6256 at 3/3/17 4:01 PM: --- Point taken. I know some entity ids and flow name etc. can get quite long, so it's pretty easy for such a from-id field to be in the hundreds of bytes. But again, if we return 100 entities, that would add only tens of kB. If we are going to keep it for all entities, we might as well keep it for single-entity queries too for consistency. was (Author: sjlee0): Point taken. I know some entity ids and flow name etc. can get quite long, so it's pretty easy for such a from-id field to be in the hundreds of bytes. But again, if we return 100 entities, that would add only tens of kB. > Add FROM_ID info key for timeline entities in reader response. > --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch > > > It is continuation with YARN-6027 to add FROM_ID key in all other timeline > entity responses which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to retrospect on idprefix > filter which is now separately provided. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894621#comment-15894621 ] Sangjin Lee commented on YARN-6256: --- Point taken. I know some entity ids and flow name etc. can get quite long, so it's pretty easy for such a from-id field to be in the hundreds of bytes. But again, if we return 100 entities, that would add only tens of kB. > Add FROM_ID info key for timeline entities in reader response. > --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch > > > It is continuation with YARN-6027 to add FROM_ID key in all other timeline > entity responses which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to retrospect on idprefix > filter which is now separately provided. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
[ https://issues.apache.org/jira/browse/YARN-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894580#comment-15894580 ] Varun Saxena commented on YARN-6256: I have a similar thought as Rohith. I had considered this too, but went with sending it for every entity for the sake of consistency, and because even if we assume an average fromId length of around 50-60 bytes and the default of 100 entities retrieved in a single call, the payload size shouldn't be too large. > Add FROM_ID info key for timeline entities in reader response. > --- > > Key: YARN-6256 > URL: https://issues.apache.org/jira/browse/YARN-6256 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6256-YARN-5355.0001.patch > > > It is continuation with YARN-6027 to add FROM_ID key in all other timeline > entity responses which includes > # Flow run entity response. > # Application entity response > # Generic timeline entity response - Here we need to retrospect on idprefix > filter which is now separately provided. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
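The payload estimates traded in this thread work out as simple arithmetic; the numbers below only restate the figures already given in the discussion:

```java
// Back-of-the-envelope check of the extra response size from including a
// fromId field in every returned entity.
class FromIdPayloadEstimate {
    static long extraBytes(int avgFromIdLen, int entities) {
        return (long) avgFromIdLen * entities;
    }

    public static void main(String[] args) {
        // Varun's estimate: ~60-byte fromId values x 100 entities ~= 6 kB.
        System.out.println(extraBytes(60, 100));
        // Sangjin's worst case: fromId values of a few hundred bytes x 100
        // entities ~= tens of kB.
        System.out.println(extraBytes(300, 100));
    }
}
```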
[jira] [Updated] (YARN-5956) Refactor ClientRMService
[ https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated YARN-5956: - Attachment: YARN-5956.12.patch > Refactor ClientRMService > > > Key: YARN-5956 > URL: https://issues.apache.org/jira/browse/YARN-5956 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki >Priority: Minor > Attachments: YARN-5956.01.patch, YARN-5956.02.patch, > YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, > YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, > YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch, YARN-5956.12.patch > > > Some refactoring can be done in {{ClientRMService}}. > - Remove redundant variable declaration > - Fill in missing javadocs > - Proper variable access modifier > - Fix some typos in method name and exception messages -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6263) NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe
[ https://issues.apache.org/jira/browse/YARN-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894468#comment-15894468 ] Hudson commented on YARN-6263: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11340 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11340/]) YARN-6263. NMTokenSecretManagerInRM.createAndGetNMToken is not thread (jlowe: rev e58fc7603053e3ac1bc2464f9622995017db5245) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/NMTokenSecretManagerInRM.java > NMTokenSecretManagerInRM.createAndGetNMToken is not thread safe > --- > > Key: YARN-6263 > URL: https://issues.apache.org/jira/browse/YARN-6263 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3 > > Attachments: YARN-6263.01.patch > > > NMTokenSecretManagerInRM.createAndGetNMToken modifies values of a > ConcurrentHashMap, which are of type HashSet, but it only acquires read lock. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
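The bug class fixed here can be shown in isolation. The sketch below is a minimal illustration of the pattern described in the issue, not the actual {{NMTokenSecretManagerInRM}} code:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of the bug class: a ConcurrentHashMap keeps its own bucket
// operations safe, but its HashSet values are plain mutable objects. Threads
// holding the shared read lock do not exclude each other, so concurrent
// add() calls on the same set can still race; the mutation needs the
// exclusive write lock (or a concurrent set implementation).
class NMTokenSetExample {
    private final Map<String, Set<Integer>> appToNodes = new ConcurrentHashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Unsafe pattern: read lock held while the HashSet value is mutated.
    boolean recordUnsafe(String app, int nodeId) {
        lock.readLock().lock();
        try {
            return appToNodes.computeIfAbsent(app, k -> new HashSet<>())
                .add(nodeId);  // HashSet.add is not thread-safe
        } finally {
            lock.readLock().unlock();
        }
    }

    // Safe variant: take the write lock around the mutation.
    boolean recordSafe(String app, int nodeId) {
        lock.writeLock().lock();
        try {
            return appToNodes.computeIfAbsent(app, k -> new HashSet<>())
                .add(nodeId);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        NMTokenSetExample ex = new NMTokenSetExample();
        System.out.println(ex.recordSafe("app_1", 1));  // first add
        System.out.println(ex.recordSafe("app_1", 1));  // duplicate
    }
}
```

The race in the unsafe variant is intermittent in practice, which is why bugs of this shape often surface only under load rather than in unit tests.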
[jira] [Commented] (YARN-6248) Killing an app with pending container requests leaves the user in UsersManager
[ https://issues.apache.org/jira/browse/YARN-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894464#comment-15894464 ] Eric Payne commented on YARN-6248: -- bq. yes. looks fine. if there are no other major concerns, i cud help to commit the same tomorrow. Thanks [~sunilg] > Killing an app with pending container requests leaves the user in UsersManager > -- > > Key: YARN-6248 > URL: https://issues.apache.org/jira/browse/YARN-6248 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: User Left Over.jpg, YARN-6248.001.patch > > > If an app is still asking for resources when it is killed, the user is left > in the UsersManager structure and shows up on the GUI. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6277) Nodemanager heap memory leak
[ https://issues.apache.org/jira/browse/YARN-6277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894451#comment-15894451 ] Naganarasimha G R commented on YARN-6277: - hi [~Feng Yuan], can you give more details about the issue, such as a heap dump, and is this only in trunk? > Nodemanager heap memory leak > > > Key: YARN-6277 > URL: https://issues.apache.org/jira/browse/YARN-6277 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Feng Yuan >Assignee: Feng Yuan > > Because of LocalDirHandlerService@LocalDirAllocator's mechanism, they will > create massive numbers of LocalFileSystem instances, which leads to a heap > leak. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6274) Documentation refers to incorrect nodemanager health checker interval property
[ https://issues.apache.org/jira/browse/YARN-6274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894427#comment-15894427 ] Weiwei Yang commented on YARN-6274: --- Thanks [~jlowe] for the quick review and commit! > Documentation refers to incorrect nodemanager health checker interval property > -- > > Key: YARN-6274 > URL: https://issues.apache.org/jira/browse/YARN-6274 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Affects Versions: 2.7.3 >Reporter: Charles Zhang >Assignee: Weiwei Yang >Priority: Trivial > Labels: beginner, documentation, easyfix > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha3 > > Attachments: YARN-6274.01.patch > > > I think one parameter in the "Monitoring Health of NodeManagers" section of > "Cluster Setup" is wrong.The parameter > "yarn.nodemanager.health-checker.script.interval-ms" should be > “yarn.nodemanager.health-checker.interval-ms”.See > http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6274) Documentation refers to incorrect nodemanager health checker interval property
[ https://issues.apache.org/jira/browse/YARN-6274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894428#comment-15894428 ] Hudson commented on YARN-6274: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11339 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11339/]) YARN-6274. Documentation refers to incorrect nodemanager health checker (jlowe: rev 05237636d3a359dc502e8fa2d66bfdcea274f86c) * (edit) hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md > Documentation refers to incorrect nodemanager health checker interval property > -- > > Key: YARN-6274 > URL: https://issues.apache.org/jira/browse/YARN-6274 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Affects Versions: 2.7.3 >Reporter: Charles Zhang >Assignee: Weiwei Yang >Priority: Trivial > Labels: beginner, documentation, easyfix > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha3 > > Attachments: YARN-6274.01.patch > > > I think one parameter in the "Monitoring Health of NodeManagers" section of > "Cluster Setup" is wrong.The parameter > "yarn.nodemanager.health-checker.script.interval-ms" should be > “yarn.nodemanager.health-checker.interval-ms”.See > http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6274) Documentation refers to incorrect nodemanager health checker interval property
[ https://issues.apache.org/jira/browse/YARN-6274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-6274: - Summary: Documentation refers to incorrect nodemanager health checker interval property (was: One error in the documentation of hadoop 2.7.3) > Documentation refers to incorrect nodemanager health checker interval property > -- > > Key: YARN-6274 > URL: https://issues.apache.org/jira/browse/YARN-6274 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Affects Versions: 2.7.3 >Reporter: Charles Zhang >Assignee: Weiwei Yang >Priority: Trivial > Labels: beginner, documentation, easyfix > Attachments: YARN-6274.01.patch > > > I think one parameter in the "Monitoring Health of NodeManagers" section of > "Cluster Setup" is wrong.The parameter > "yarn.nodemanager.health-checker.script.interval-ms" should be > “yarn.nodemanager.health-checker.interval-ms”.See > http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6274) One error in the documentation of hadoop 2.7.3
[ https://issues.apache.org/jira/browse/YARN-6274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-6274: - Fix Version/s: (was: 2.7.3) Thanks for the report, [~rebeyond1218] and for the patch, [~cheersyang]! Note that in the future the Fix Version should only be set by the committer when the patch is committed, as that field indicates what versions actually have the fix. The Target Version field should be used to indicate which versions are being targeted to fix. +1 patch lgtm. Committing this. > One error in the documentation of hadoop 2.7.3 > -- > > Key: YARN-6274 > URL: https://issues.apache.org/jira/browse/YARN-6274 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Affects Versions: 2.7.3 >Reporter: Charles Zhang >Assignee: Weiwei Yang >Priority: Trivial > Labels: beginner, documentation, easyfix > Attachments: YARN-6274.01.patch > > > I think one parameter in the "Monitoring Health of NodeManagers" section of > "Cluster Setup" is wrong.The parameter > "yarn.nodemanager.health-checker.script.interval-ms" should be > “yarn.nodemanager.health-checker.interval-ms”.See > http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15894388#comment-15894388 ]

Hadoop QA commented on YARN-6280:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 5 new + 105 unchanged - 3 fixed = 110 total (was 108) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 27s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 20s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6280 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855852/YARN-6280.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 4c060c50ea10 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / eb5a179 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/15153/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| compile | https://builds.apache.org/job/PreCommit-YARN-Build/15153/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| javac |
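For context, YARN-6280 proposes letting callers of the ResourceManager Cluster Applications REST API opt out of the potentially large per-application ResourceRequest payload. A minimal sketch of what such a request might look like; the `deSelects=resourceRequests` parameter name is illustrative (the patch under review defines the actual name), and the RM address is hypothetical:

```shell
# Hypothetical ResourceManager address (default webapp port 8088).
RM="http://resourcemanager.example.com:8088"

# Full application listing, ResourceRequest info included:
APPS_URL="$RM/ws/v1/cluster/apps?states=RUNNING"

# Same listing with the resource-request details deselected
# (parameter name is illustrative, per the proposal in this issue):
APPS_LITE_URL="$RM/ws/v1/cluster/apps?states=RUNNING&deSelects=resourceRequests"

echo "$APPS_LITE_URL"
# To actually query a live cluster:
#   curl -s "$APPS_LITE_URL"
```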