[jira] [Commented] (YARN-7384) Remove apiserver cmd and merge service cmd into application cmd
[ https://issues.apache.org/jira/browse/YARN-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221751#comment-16221751 ] Hadoop QA commented on YARN-7384: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} yarn-native-services Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 45s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 21s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 22s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 43s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 41s{color} | {color:green} yarn-native-services passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 37s{color} | {color:orange} root: The patch generated 105 new + 179 unchanged - 309 fixed = 284 total (was 488) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 25s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 20s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} |
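The checkstyle row in the report above states its result as a delta against the branch: "105 new + 179 unchanged - 309 fixed = 284 total (was 488)". The arithmetic can be sanity-checked directly; the numbers are copied from the report, and the variable names below are ours:

```java
// Sanity check of the Yetus checkstyle delta reported above.
public class CheckstyleDelta {
    // The patched tree carries the issues that survived the patch
    // (unchanged) plus any issues the patch introduced (new).
    public static int total(int newIssues, int unchanged) {
        return newIssues + unchanged;
    }

    public static void main(String[] args) {
        int newIssues = 105, unchanged = 179, fixed = 309, before = 488;
        int after = total(newIssues, unchanged);
        // 105 + 179 = 284, and equivalently 488 - 309 + 105 = 284;
        // note unchanged itself is before - fixed = 488 - 309 = 179.
        assert after == 284;
        assert after == before - fixed + newIssues;
        assert unchanged == before - fixed;
        System.out.println("checkstyle total after patch = " + after);
    }
}
```

So a -0 checkstyle vote here means the patch fixed far more issues (309) than it introduced (105), but any new issues at all still draw the warning.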
[jira] [Commented] (YARN-7289) Application lifetime does not work with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221750#comment-16221750 ] Hadoop QA commented on YARN-7289: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 56m 1s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s{color} | {color:red} The patch generated 3 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7289 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894175/YARN-7289.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 30bda41bfd62 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 36e158a | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18180/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-YARN-Build/18180/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18180/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Application lifetime does
[jira] [Commented] (YARN-7379) Moving logging APIs over to slf4j in hadoop-yarn-client
[ https://issues.apache.org/jira/browse/YARN-7379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221749#comment-16221749 ] Yeliang Cang commented on YARN-7379: [~ajisakaa], sorry for missing that, will submit a new patch soon! > Moving logging APIs over to slf4j in hadoop-yarn-client > --- > > Key: YARN-7379 > URL: https://issues.apache.org/jira/browse/YARN-7379 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Attachments: YARN-7379.001.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7379) Moving logging APIs over to slf4j in hadoop-yarn-client
[ https://issues.apache.org/jira/browse/YARN-7379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221724#comment-16221724 ] Akira Ajisaka commented on YARN-7379: - Would you replace {{org.apache.log4j}} APIs with slf4j APIs in TestYarnClient.java as well? > Moving logging APIs over to slf4j in hadoop-yarn-client > --- > > Key: YARN-7379 > URL: https://issues.apache.org/jira/browse/YARN-7379 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Attachments: YARN-7379.001.patch > >
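The change requested above is the usual log4j-to-slf4j migration. A sketch of the kind of edit involved follows; the actual contents of TestYarnClient.java are not shown in this thread, so the "before" lines are an assumption about typical log4j usage:

```java
// before (direct log4j API, assumed):
//   import org.apache.log4j.Logger;
//   private static final Logger LOG = Logger.getLogger(TestYarnClient.class);

// after (slf4j facade):
//   import org.slf4j.Logger;
//   import org.slf4j.LoggerFactory;
//   private static final Logger LOG =
//       LoggerFactory.getLogger(TestYarnClient.class);
```

One caveat: slf4j is a facade, so log4j-specific calls such as setting a logger's level have no direct slf4j equivalent and need separate handling during migration.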
[jira] [Commented] (YARN-7289) Application lifetime does not work with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221695#comment-16221695 ] Rohith Sharma K S commented on YARN-7289: - Looks like Jenkins didn't run; kicking it off manually. > Application lifetime does not work with FairScheduler > - > > Key: YARN-7289 > URL: https://issues.apache.org/jira/browse/YARN-7289 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi > Attachments: YARN-7289.000.patch, YARN-7289.001.patch, > YARN-7289.002.patch, YARN-7289.003.patch, YARN-7289.004.patch, > YARN-7289.005.patch > >
[jira] [Commented] (YARN-7307) Allow client/AM update supported resource types via YARN APIs
[ https://issues.apache.org/jira/browse/YARN-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221670#comment-16221670 ] Hudson commented on YARN-7307: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13145 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13145/]) YARN-7307. Allow client/AM update supported resource types via YARN (wangda: rev 36e158ae98ef8b72a7a9f63102b714e025cafcc5) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java > Allow client/AM update supported resource types via YARN APIs > - > > Key: YARN-7307 > URL: https://issues.apache.org/jira/browse/YARN-7307 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Wangda Tan >Assignee: Sunil G >Priority: Blocker > Attachments: YARN-7307.001.patch, YARN-7307.002.patch, > 
YARN-7307.003.patch, YARN-7307.004.patch > > > The existing feature requires every client to have a resource-types.xml in order to > use multiple resource types; should we allow the client/AM to update supported > resource types via YARN APIs?
[jira] [Commented] (YARN-5516) Add REST API for supporting recurring reservations
[ https://issues.apache.org/jira/browse/YARN-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221668#comment-16221668 ] Subru Krishnan commented on YARN-5516: -- I cherry-picked it to branch-2 also. > Add REST API for supporting recurring reservations > -- > > Key: YARN-5516 > URL: https://issues.apache.org/jira/browse/YARN-5516 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Sangeetha Abdu Jyothi >Assignee: Sean Po > Fix For: 2.9.0, 3.0.0, 3.1.0 > > Attachments: YARN-5516.v001.patch, YARN-5516.v002.patch, > YARN-5516.v003.patch, YARN-5516.v004.patch, YARN-5516.v005.patch, > YARN-5516.v006.patch > > > YARN-5516 changing REST API of the reservation system to support periodicity.
[jira] [Updated] (YARN-5516) Add REST API for supporting recurring reservations
[ https://issues.apache.org/jira/browse/YARN-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-5516: - Fix Version/s: 2.9.0 > Add REST API for supporting recurring reservations > -- > > Key: YARN-5516 > URL: https://issues.apache.org/jira/browse/YARN-5516 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Sangeetha Abdu Jyothi >Assignee: Sean Po > Fix For: 2.9.0, 3.0.0, 3.1.0 > > Attachments: YARN-5516.v001.patch, YARN-5516.v002.patch, > YARN-5516.v003.patch, YARN-5516.v004.patch, YARN-5516.v005.patch, > YARN-5516.v006.patch > > > YARN-5516 changing REST API of the reservation system to support periodicity.
[jira] [Commented] (YARN-6929) yarn.nodemanager.remote-app-log-dir structure is not scalable
[ https://issues.apache.org/jira/browse/YARN-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221665#comment-16221665 ] Hadoop QA commented on YARN-6929: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 50s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 51s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 57s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 57s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 180 unchanged - 32 fixed = 181 total (was 212) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 54s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 4m 8s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 54s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 19s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 88m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6929 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894253/YARN-6929.2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 929085d754a5 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221661#comment-16221661 ] Hadoop QA commented on YARN-6413: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 9s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry: The patch generated 16 new + 15 unchanged - 0 fixed = 31 total (was 15) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-yarn-registry in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6413 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894267/YARN-6413.v3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 80bee81a8d3c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a25b5aa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/18178/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18178/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-YARN-Build/18178/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry | | Console output |
[jira] [Commented] (YARN-7403) Compute global and local preemption
[ https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221655#comment-16221655 ] Hadoop QA commented on YARN-7403: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 27 new + 24 unchanged - 1 fixed = 51 total (was 25) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 16s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 9s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | Nullcheck of FedQueue.children at line 137 of value previously dereferenced in org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.propagate(ResourceInfo) At FedQueue.java:137 of value previously dereferenced in org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.propagate(ResourceInfo) At FedQueue.java:[line 130] | | | Write to static field org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.jsonjaxbContext from instance method new org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue() At FedQueue.java:from instance method new
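The two new FindBugs warnings reported above are common patterns: a redundant null check on a value that was already dereferenced (the check can never help, because a null would have thrown earlier), and a write to a static field from an instance method or constructor (every new instance clobbers shared state). A minimal self-contained illustration; the class and field names below are ours, not the real FedQueue code:

```java
import java.util.List;

// Sketch of the two FindBugs warning patterns flagged above.
public class NullcheckAndStaticWrite {
    static String jsonContext;          // shared static state
    List<String> children;

    // RCN pattern: 'children' is dereferenced first, so the later
    // null check is dead code -- a null already threw NPE above.
    int propagateBuggy() {
        int size = children.size();     // dereference happens here
        if (children != null) {         // FindBugs: redundant null check
            return size;
        }
        return 0;
    }

    // Fix: test before dereferencing.
    int propagateFixed() {
        return (children == null) ? 0 : children.size();
    }

    // ST pattern: a constructor (an instance method) writes a static
    // field, so constructing a second instance overwrites the first's.
    NullcheckAndStaticWrite(String ctx) {
        jsonContext = ctx;              // FindBugs: static write from instance
    }
}
```

The usual fix for the second warning is to initialize the static exactly once (in a static initializer, or lazily with synchronization) instead of on every construction.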
[jira] [Updated] (YARN-7307) Allow client/AM update supported resource types via YARN APIs
[ https://issues.apache.org/jira/browse/YARN-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7307: - Summary: Allow client/AM update supported resource types via YARN APIs (was: Revisit resource-types.xml loading behaviors) > Allow client/AM update supported resource types via YARN APIs > - > > Key: YARN-7307 > URL: https://issues.apache.org/jira/browse/YARN-7307 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Wangda Tan >Assignee: Sunil G >Priority: Blocker > Attachments: YARN-7307.001.patch, YARN-7307.002.patch, > YARN-7307.003.patch, YARN-7307.004.patch > > > The existing feature requires every client to have a resource-types.xml in order to > use multiple resource types; should we allow the client/AM to update supported > resource types via YARN APIs?
[jira] [Commented] (YARN-7332) Compute effectiveCapacity per each resource vector
[ https://issues.apache.org/jira/browse/YARN-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221646#comment-16221646 ] Wangda Tan commented on YARN-7332: -- Retriggered Jenkins. > Compute effectiveCapacity per each resource vector > -- > > Key: YARN-7332 > URL: https://issues.apache.org/jira/browse/YARN-7332 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: YARN-5881 >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7332.YARN-5881.001.patch, > YARN-7332.YARN-5881.002.patch, YARN-7332.YARN-5881.003.patch > > > Currently effective capacity uses a generalized approach based on dominance. > Hence some vectors may not be calculated correctly.
[jira] [Commented] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221645#comment-16221645 ] Hadoop QA commented on YARN-6413: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 44s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 5m 13s{color} | {color:red} hadoop-yarn in trunk failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 12s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 16 new + 223 unchanged - 0 fixed = 239 total (was 223) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 35s{color} | {color:red} hadoop-yarn-api in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6413 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894260/YARN-6413.v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a269ccc37855 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b1de786 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | compile |
[jira] [Commented] (YARN-7390) All reservation related test cases failed when TestYarnClient runs against Fair Scheduler.
[ https://issues.apache.org/jira/browse/YARN-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221644#comment-16221644 ] Hadoop QA commented on YARN-7390: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 81 unchanged - 2 fixed = 82 total (was 83) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 27s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 15s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}146m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7390 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894245/YARN-7390.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 68b793dda269 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 088ffee | | maven | version: Apache Maven 3.3.9 | | Default Java |
[jira] [Commented] (YARN-7394) Merge code paths for Reservation/Plan queues and Auto Created queues
[ https://issues.apache.org/jira/browse/YARN-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221638#comment-16221638 ] Hadoop QA commented on YARN-7394: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 60 new + 211 unchanged - 16 fixed = 271 total (was 227) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 52s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7394 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894256/YARN-7394.2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f4c69f8ba752 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b1de786 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Updated] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-6413: Attachment: YARN-6413.v3.patch Remove configuration (should be added in a different patch, not being used here), add licenses > Yarn Registry FS implementation > --- > > Key: YARN-6413 > URL: https://issues.apache.org/jira/browse/YARN-6413 > Project: Hadoop YARN > Issue Type: Improvement > Components: amrmproxy, api, resourcemanager >Reporter: Ellen Hui >Assignee: Ellen Hui > Attachments: 0001-Registry-API-v2.patch, > 0001-YARN-6413-Yarn-Registry-FS-Implementation.patch, > 0002-Registry-API-v2.patch, 0003-Registry-API-api-only.patch, > 0004-Registry-API-api-stubbed.patch, YARN-6413.v1.patch, YARN-6413.v2.patch, > YARN-6413.v3.patch > > > Add a RegistryOperations implementation that writes records to the file > system. This does not include any changes to the API, to avoid compatibility > issues.
[jira] [Updated] (YARN-7384) Remove apiserver cmd and merge service cmd into application cmd
[ https://issues.apache.org/jira/browse/YARN-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-7384: - Attachment: YARN-7384-yarn-native-services.003.patch > Remove apiserver cmd and merge service cmd into application cmd > --- > > Key: YARN-7384 > URL: https://issues.apache.org/jira/browse/YARN-7384 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-7384-yarn-native-services.001.patch, > YARN-7384-yarn-native-services.002.patch, > YARN-7384-yarn-native-services.003.patch > > > As per discussion on YARN-7326.
[jira] [Commented] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled
[ https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221586#comment-16221586 ] Hadoop QA commented on YARN-6704: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} YARN-6704 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-6704 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12881555/YARN-6704.v4.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18175/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add Federation Interceptor restart when work preserving NM is enabled > - > > Key: YARN-6704 > URL: https://issues.apache.org/jira/browse/YARN-6704 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang > Attachments: YARN-6704-YARN-2915.v1.patch, > YARN-6704-YARN-2915.v2.patch, YARN-6704.v3.patch, YARN-6704.v4.patch > > > YARN-1336 added the ability to restart NM without losing any running > containers. {{AMRMProxy}} restart was added in YARN-6127. In a Federated YARN > environment, there's additional state in the {{FederationInterceptor}} to > allow for spanning across multiple sub-clusters, so we need to enhance > {{FederationInterceptor}} to support work-preserving restart.
[jira] [Comment Edited] (YARN-7276) Federation Router Web Service fixes
[ https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221574#comment-16221574 ] Íñigo Goiri edited comment on YARN-7276 at 10/27/17 1:50 AM: - Not sure if the timeouts for those unit tests are normal or there is an actual issue. Here is the result: https://builds.apache.org/job/PreCommit-YARN-Build/18169/testReport/org.apache.hadoop.yarn.server.router.webapp/TestRouterWebServicesREST/ It seems like most of them are taking a long time. It didn't happen in the previous patches, so it might be related to the Hadoop machines being overloaded. Shall we increase the timeouts to prevent these issues? was (Author: elgoiri): Not sure if the timeouts for those unit tests is normal or there is an actual issue. It didn't happen in the previous patches... > Federation Router Web Service fixes > --- > > Key: YARN-7276 > URL: https://issues.apache.org/jira/browse/YARN-7276 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: YARN-7276-branch-2.000.patch, > YARN-7276-branch-2.001.patch, YARN-7276-branch-2.002.patch, > YARN-7276-branch-2.003.patch, YARN-7276-branch-2.004.patch, > YARN-7276.000.patch, YARN-7276.001.patch, YARN-7276.002.patch, > YARN-7276.003.patch, YARN-7276.004.patch, YARN-7276.005.patch, > YARN-7276.006.patch, YARN-7276.007.patch, YARN-7276.009.patch, > YARN-7276.010.patch, YARN-7276.011.patch, YARN-7276.012.patch, > YARN-7276.013.patch > > > While testing YARN-3661, I found a few issues with the REST interface in the > Router: > * No support for empty content (error 204) > * Media type support > * Attributes in {{FederationInterceptorREST}} > * Support for empty states and labels > * DefaultMetricsSystem initialization is missing
[jira] [Commented] (YARN-7276) Federation Router Web Service fixes
[ https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221574#comment-16221574 ] Íñigo Goiri commented on YARN-7276: --- Not sure if the timeouts for those unit tests are normal or there is an actual issue. It didn't happen in the previous patches... > Federation Router Web Service fixes > --- > > Key: YARN-7276 > URL: https://issues.apache.org/jira/browse/YARN-7276 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: YARN-7276-branch-2.000.patch, > YARN-7276-branch-2.001.patch, YARN-7276-branch-2.002.patch, > YARN-7276-branch-2.003.patch, YARN-7276-branch-2.004.patch, > YARN-7276.000.patch, YARN-7276.001.patch, YARN-7276.002.patch, > YARN-7276.003.patch, YARN-7276.004.patch, YARN-7276.005.patch, > YARN-7276.006.patch, YARN-7276.007.patch, YARN-7276.009.patch, > YARN-7276.010.patch, YARN-7276.011.patch, YARN-7276.012.patch, > YARN-7276.013.patch > > > While testing YARN-3661, I found a few issues with the REST interface in the > Router: > * No support for empty content (error 204) > * Media type support > * Attributes in {{FederationInterceptorREST}} > * Support for empty states and labels > * DefaultMetricsSystem initialization is missing
[jira] [Commented] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow
[ https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221568#comment-16221568 ] Hudson commented on YARN-7262: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13142 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13142/]) YARN-7262. Add a hierarchy into the ZKRMStateStore for delegation token (rkanter: rev b1de78619f3e5e25d6f9d5eaf41925f22d212fb9) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java > Add a hierarchy into the ZKRMStateStore for delegation token znodes to > prevent jute buffer overflow > --- > > Key: YARN-7262 > URL: https://issues.apache.org/jira/browse/YARN-7262 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Robert Kanter >Assignee: Robert Kanter > Fix For: 2.9.0, 3.0.0 > > Attachments: YARN-7262.001.patch, YARN-7262.002.patch, > YARN-7262.003.patch, YARN-7262.003.patch > > > We've seen users who are running into a problem where the RM is storing so > many delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those > znodes is higher than the jute buffer. This is fine during operations, but > becomes a problem on a fail over because the RM will try to read in all of > the token znodes (i.e. 
call {{getChildren}} on the parent znode). This is > particularly bad because everything appears to be okay, but then if a > failover occurs you end up with no active RMs. > There was a similar problem with the Yarn application data that was fixed in > YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull > subchildren without overflowing the jute buffer (though it's off by default). > We should add a hierarchy similar to that of YARN-2962, but for the > delegation token znodes.
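The hierarchy idea described above amounts to bucketing token znodes under intermediate parents so that no single {{getChildren}} listing grows unboundedly. The sketch below illustrates one such path-bucketing scheme by sequence-number prefix; the path layout, znode names, and the split parameter are hypothetical and may differ from the actual YARN-7262 patch.

```java
// Illustrative sketch of bucketing delegation-token znodes under intermediate
// parents, in the spirit of the YARN-2962-style hierarchy described above.
// The layout and the splitIndex parameter are assumptions, not the real scheme.
public class TokenZnodePaths {

    /**
     * Returns a bucketed path such as
     * root/RMDTSequentialNumber_12/RMDelegationToken_1234, where the bucket is
     * derived from the leading digits of the token sequence number and the last
     * splitIndex digits stay inside the bucket. splitIndex == 0 means flat layout.
     */
    static String tokenPath(String root, int sequenceNumber, int splitIndex) {
        String name = "RMDelegationToken_" + sequenceNumber;
        if (splitIndex <= 0) {
            // Flat layout: every token directly under one parent, so a failover
            // must list all of them in a single getChildren() call.
            return root + "/" + name;
        }
        String digits = String.valueOf(sequenceNumber);
        String bucket = digits.length() > splitIndex
                ? digits.substring(0, digits.length() - splitIndex)
                : "0";
        return root + "/RMDTSequentialNumber_" + bucket + "/" + name;
    }

    public static void main(String[] args) {
        // prints /rmstore/tokens/RMDTSequentialNumber_12/RMDelegationToken_1234
        System.out.println(tokenPath("/rmstore/tokens", 1234, 2));
    }
}
```

With such a layout, a recovering RM lists one bucket at a time, keeping each response under the jute buffer limit at the cost of one extra level of znodes.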
[jira] [Updated] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-6413: Attachment: YARN-6413.v2.patch Fix findbugs error. > Yarn Registry FS implementation > --- > > Key: YARN-6413 > URL: https://issues.apache.org/jira/browse/YARN-6413 > Project: Hadoop YARN > Issue Type: Improvement > Components: amrmproxy, api, resourcemanager >Reporter: Ellen Hui >Assignee: Ellen Hui > Attachments: 0001-Registry-API-v2.patch, > 0001-YARN-6413-Yarn-Registry-FS-Implementation.patch, > 0002-Registry-API-v2.patch, 0003-Registry-API-api-only.patch, > 0004-Registry-API-api-stubbed.patch, YARN-6413.v1.patch, YARN-6413.v2.patch > > > Add a RegistryOperations implementation that writes records to the file > system. This does not include any changes to the API, to avoid compatibility > issues.
[jira] [Commented] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221556#comment-16221556 ] Hadoop QA commented on YARN-6413: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 16 new + 223 unchanged - 0 fixed = 239 total (was 223) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 46s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 38s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry | | | org.apache.hadoop.registry.client.impl.RegistryOperationsStoreService.resolve(String) ignores result of org.apache.hadoop.fs.FSDataInputStream.read(byte[]) At RegistryOperationsStoreService.java: At RegistryOperationsStoreService.java:[line 147] | | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6413 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894239/YARN-6413.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient
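The FindBugs warning above ({{RegistryOperationsStoreService.resolve}} ignores the result of {{FSDataInputStream.read(byte[])}}) points at a classic short-read bug: {{read(byte[])}} may fill only part of the buffer and returns how many bytes it actually read. A minimal sketch of the usual fix pattern, looping until the buffer is full; the helper below is a local illustration using plain {{java.io}}, not the Hadoop code under review.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch of the fix for the "ignores result of read(byte[])"
// warning: a single read() call may return fewer bytes than requested, so the
// caller must loop (or use a read-fully utility) to fill the whole buffer.
public class ReadFullyExample {

    static void readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            // read() may return fewer bytes than asked for; keep track of the offset.
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) {
                throw new EOFException("stream ended after " + off + " bytes");
            }
            off += n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello registry".getBytes("UTF-8");
        byte[] buf = new byte[data.length];
        readFully(new ByteArrayInputStream(data), buf);
        System.out.println(new String(buf, "UTF-8")); // prints hello registry
    }
}
```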
[jira] [Commented] (YARN-7276) Federation Router Web Service fixes
[ https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221553#comment-16221553 ] Hadoop QA commented on YARN-7276: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 1s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.router.webapp.TestRouterWebServicesREST | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7276 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894233/YARN-7276.013.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 63cf0c70dcc2 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 088ffee | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/18169/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18169/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18169/console
[jira] [Updated] (YARN-7394) Merge code paths for Reservation/Plan queues and Auto Created queues
[ https://issues.apache.org/jira/browse/YARN-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suma Shivaprasad updated YARN-7394: --- Attachment: YARN-7394.2.patch Fixed UTs and file renaming > Merge code paths for Reservation/Plan queues and Auto Created queues > > > Key: YARN-7394 > URL: https://issues.apache.org/jira/browse/YARN-7394 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad > Attachments: YARN-7394.1.patch, YARN-7394.2.patch, YARN-7394.patch > > > The initialization/reinitialization logic for ReservationQueue and > AutoCreated leaf queues is similar. The proposal is to rename > ReservationQueue to the more generic name AutoCreatedLeafQueue, which is > managed either by PlanQueue (already exists) or by AutoCreateEnabledParentQueue (new > class). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7146) Many RM unit tests failing with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221536#comment-16221536 ] Hadoop QA commented on YARN-7146: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 7s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 4s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 20 new + 460 unchanged - 5 fixed = 480 total (was 465) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 34s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 49m 11s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 43s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 0s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}188m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.client.TestFederationRMFailoverProxyProvider | | Timed out junit tests | org.apache.hadoop.yarn.client.TestRMFailover | | | org.apache.hadoop.yarn.client.cli.TestYarnCLI | | | org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA | | | org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA | | | org.apache.hadoop.yarn.client.api.impl.TestYarnClient | | | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:17213a0 | | JIRA Issue | YARN-7146 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894211/YARN-7146.004.branch-2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 12fdffa3e4a5 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (YARN-7378) Documentation changes post branch-2 merge
[ https://issues.apache.org/jira/browse/YARN-7378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221535#comment-16221535 ] Hadoop QA commented on YARN-7378: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:17213a0 | | JIRA Issue | YARN-7378 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894242/YARN-7378-branch-2.0001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux b72e24e924fa 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 952aa3f | | maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18171/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Documentation changes post branch-2 merge > - > > Key: YARN-7378 > URL: https://issues.apache.org/jira/browse/YARN-7378 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineclient, timelinereader, timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C > Attachments: YARN-7378-branch-2.0001.patch, schema creation > documentation.png > > > Need to update the documentation for the schema creator command. It should > include the timeline-service-hbase jar as well as the hbase-server jar in the > classpath when the command is run, due to the classpath changes from YARN-7190.
[jira] [Commented] (YARN-7332) Compute effectiveCapacity per each resource vector
[ https://issues.apache.org/jira/browse/YARN-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221532#comment-16221532 ] Hadoop QA commented on YARN-7332: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 7m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} YARN-5881 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 22s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} YARN-5881 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 10 new + 100 unchanged - 1 fixed = 110 total (was 101) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler | | | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.TestReservationAgents | | | org.apache.hadoop.yarn.server.resourcemanager.TestRMHA | | | org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore | | | org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore | | | org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService | | | org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification | | | org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService | | | org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation | | | org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy | | |
[jira] [Commented] (YARN-7224) Support GPU isolation for docker container
[ https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221526#comment-16221526 ] Hadoop QA commented on YARN-7224: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 35s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 39 new + 388 unchanged - 11 fixed = 427 total (was 399) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 50s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 0s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not
[jira] [Assigned] (YARN-6128) Add support for AMRMProxy HA
[ https://issues.apache.org/jira/browse/YARN-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang reassigned YARN-6128: -- Assignee: Botong Huang (was: Subru Krishnan) > Add support for AMRMProxy HA > > > Key: YARN-6128 > URL: https://issues.apache.org/jira/browse/YARN-6128 > Project: Hadoop YARN > Issue Type: Sub-task > Components: amrmproxy, nodemanager >Reporter: Subru Krishnan >Assignee: Botong Huang > > YARN-556 added the ability for RM failover without losing any running > applications. In a Federated YARN environment, there's additional state in > the {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we > need to enhance {{AMRMProxy}} to support HA.
[jira] [Updated] (YARN-6929) yarn.nodemanager.remote-app-log-dir structure is not scalable
[ https://issues.apache.org/jira/browse/YARN-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-6929: Attachment: YARN-6929.2.patch > yarn.nodemanager.remote-app-log-dir structure is not scalable > - > > Key: YARN-6929 > URL: https://issues.apache.org/jira/browse/YARN-6929 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation >Affects Versions: 2.7.3 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph > Attachments: YARN-6929.1.patch, YARN-6929.2.patch, YARN-6929.2.patch, > YARN-6929.patch > > > The current directory structure for yarn.nodemanager.remote-app-log-dir is > not scalable. The maximum subdirectory limit is 1048576 by default (HDFS-6102). > With a retention (yarn.log-aggregation.retain-seconds) of 7 days, there is a high > chance that LogAggregationService fails to create a new directory with > FSLimitException$MaxDirectoryItemsExceededException. > The current structure is > //logs/. This can be > improved by adding a date subdirectory, e.g. > //logs// > {code} > WARN > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService: > Application failed to init aggregation > org.apache.hadoop.yarn.exceptions.YarnRuntimeException: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): > The directory item limit of /app-logs/yarn/logs is exceeded: limit=1048576 > items=1048576 > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxDirItems(FSDirectory.java:2021) > > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:2072) > > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedMkdir(FSDirectory.java:1841) > > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsRecursively(FSNamesystem.java:4351) > > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4262) > > at > 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4221) > > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4194) > > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813) > > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600) > > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619) > > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.createAppDir(LogAggregationService.java:308) > > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:366) > > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:320) > > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:443) > > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:67) > > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) > > at > 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) > at java.lang.Thread.run(Thread.java:745) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): > The directory item limit of /app-logs/yarn/logs is exceeded: limit=1048576 > items=1048576 > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxDirItems(FSDirectory.java:2021) > > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:2072) > > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedMkdir(FSDirectory.java:1841) > > at >
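The proposal above adds a date component under the per-user logs directory so that no single directory ever has to hold every retained application. Since the exact path template is truncated in the description, the sketch below is only an illustration; the class name, layout, and date format are assumptions:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public final class AggregatedLogPath {
    // Hypothetical date format; the actual patch may choose a different one.
    private static final DateTimeFormatter DAY =
            DateTimeFormatter.ofPattern("yyyy/MM/dd");

    // Sketch of the idea: <remote-app-log-dir>/<user>/logs/<date>/<appId>,
    // so each day's directory stays far below the NameNode's
    // max-directory-items limit instead of one directory growing unbounded.
    public static String appLogDir(String remoteRoot, String user,
                                   String appId, LocalDate day) {
        return String.join("/", remoteRoot, user, "logs", DAY.format(day), appId);
    }
}
```

With this layout, the retention job can also delete a whole expired day's subtree in one operation rather than scanning every per-app directory.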
[jira] [Commented] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_
[ https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221510#comment-16221510 ] Hudson commented on YARN-7320: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13141 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13141/]) YARN-7320. Duplicate LiteralByteStrings in (rkanter: rev 088ffee7165d0e2e4fb9af7fb8f33626b0ed8ed3) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NodeHeartbeatResponsePBImpl.java > Duplicate LiteralByteStrings in > SystemCredentialsForAppsProto.credentialsForApp_ > > > Key: YARN-7320 > URL: https://issues.apache.org/jira/browse/YARN-7320 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Fix For: 3.0.0 > > Attachments: YARN-7320.01.addendum.patch, YARN-7320.01.patch, > YARN-7320.02.patch > > > Using jxray (www.jxray.com) I've analyzed several heap dumps from YARN > Resource Manager running in a big cluster. The tool uncovered several sources > of memory waste. One problem, which results in wasting more than a quarter of > all memory, is a large number of duplicate {{LiteralByteString}} objects > coming from the following reference chain: > {code} > 1,011,810K (26.9%): byte[]: 5416705 / 100% dup arrays (22108 unique) > ↖com.google.protobuf.LiteralByteString.bytes > ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$.credentialsForApp_ > ↖{j.u.ArrayList} > ↖j.u.Collections$UnmodifiableRandomAccessList.c > ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$NodeHeartbeatResponseProto.systemCredentialsForApps_ > ↖org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.NodeHeartbeatResponsePBImpl.proto > ↖org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl.latestNodeHeartBeatResponse > ↖org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.rmNode > ... 
> {code} > That is, collectively reference chains that look as above hold in memory 5.4 > million {{LiteralByteString}} objects, but only ~22 thousand of these objects > are unique. Deduplicating these objects, e.g. using a Google Object Interner > instance, would save ~1GB of memory. > It looks like the main place where the above {{LiteralByteString}}s are > created and attached to the {{SystemCredentialsForAppsProto}} objects is in > {{NodeHeartbeatResponsePBImpl.java}}, method > {{addSystemCredentialsToProto()}}. Probably adding a call to an interner > there will fix the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
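The interning idea proposed in YARN-7320 can be sketched in isolation. The class below is a hypothetical, self-contained illustration of deduplication through a canonicalizing map — the actual patch presumably uses Guava's Interner inside NodeHeartbeatResponsePBImpl, and all names here are illustrative stand-ins:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the dedup idea: wrap the credential byte[] in a value type with
// proper equals/hashCode, then route every instance through an interner so
// equal payloads share one canonical object instead of millions of copies.
final class CredentialsInterner {
    static final class Bytes {
        final byte[] data;
        Bytes(byte[] data) { this.data = data; }
        @Override public boolean equals(Object o) {
            return o instanceof Bytes && Arrays.equals(data, ((Bytes) o).data);
        }
        @Override public int hashCode() { return Arrays.hashCode(data); }
    }

    // Canonical pool; a ConcurrentHashMap stands in for Guava's Interner here.
    private static final Map<Bytes, Bytes> POOL = new ConcurrentHashMap<>();

    // Returns the canonical instance holding this payload.
    static Bytes intern(byte[] payload) {
        Bytes candidate = new Bytes(payload);
        Bytes prev = POOL.putIfAbsent(candidate, candidate);
        return prev != null ? prev : candidate;
    }
}
```

With this in place, two heartbeat responses carrying byte-equal credentials end up referencing the same object, which is exactly the ~1GB saving the heap analysis points at.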
[jira] [Updated] (YARN-7403) Compute global and local preemption
[ https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-7403: --- Attachment: YARN-7403.draft2.patch Fixing ASF license. > Compute global and local preemption > --- > > Key: YARN-7403 > URL: https://issues.apache.org/jira/browse/YARN-7403 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch > > > This JIRA tracks algorithmic effort to combine the local queue views of > capacity guarantee/use/demand and compute the global amount of preemption, > and based on that, "where" (in which sub-cluster) preemption will be enacted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
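The "simple demand-based algorithm" mentioned in the draft can be pictured as a proportional split. The sketch below is a guess at the shape of such a split — splitting a globally computed preemption amount across sub-clusters in proportion to their local demand — and is not the code in YARN-7403.draft2.patch; all names are illustrative:

```java
// Hypothetical demand-proportional split of a globally computed preemption
// amount across sub-clusters.
final class PreemptionSplit {
    // total: resources to preempt cluster-wide; demand[i]: over-guarantee
    // demand observed in sub-cluster i. Returns per-sub-cluster amounts.
    static double[] split(double total, double[] demand) {
        double sum = 0;
        for (double d : demand) {
            sum += d;
        }
        double[] out = new double[demand.length];
        if (sum <= 0) {
            return out; // no demand anywhere: nothing to preempt locally
        }
        for (int i = 0; i < demand.length; i++) {
            out[i] = total * demand[i] / sum; // proportional share
        }
        return out;
    }
}
```

For example, a global preemption of 10 with demands {1, 3} would be enacted as 2.5 in the first sub-cluster and 7.5 in the second.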
[jira] [Created] (YARN-7405) Bias container allocations based on global view
Carlo Curino created YARN-7405: -- Summary: Bias container allocations based on global view Key: YARN-7405 URL: https://issues.apache.org/jira/browse/YARN-7405 Project: Hadoop YARN Issue Type: Sub-task Reporter: Carlo Curino Each RM in a federation should bias its local allocations of containers based on the global over/under utilization of queues. As part of this the local RM should account for the work that other RMs will be doing in between the updates we receive via the heartbeats of YARN-7404 (the mechanics used for synchronization). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7402) Federation: Global Queues
[ https://issues.apache.org/jira/browse/YARN-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221498#comment-16221498 ] Carlo Curino commented on YARN-7402: This started with conversations with Bill Ramsey, [~roniburd], [~subru], [~asuresh], [~kkaranasos] and [~chris.douglas]. The goal is to extend YARN's ability to enforce global invariants across a federated cluster, while retaining the scalability of federation. For this purpose, the sharing of information among sub-clusters happens on heartbeats and is limited to a very summarized view of the world (queue-level aggregates only). > Federation: Global Queues > - > > Key: YARN-7402 > URL: https://issues.apache.org/jira/browse/YARN-7402 > Project: Hadoop YARN > Issue Type: New Feature > Components: federation >Reporter: Carlo Curino >Assignee: Carlo Curino > > YARN Federation today requires manual configuration of queues within each > sub-cluster, and each RM operates "in isolation". This has a few issues: > # Preemption is computed locally (and might far exceed the global need) > # Jobs within a queue are forced to consume their resources "evenly" based on > queue mapping > This umbrella JIRA tracks a new feature that leverages the > FederationStateStore as a synchronization mechanism among RMs, and allows for > allocation and preemption decisions to be based on a (close to up-to-date) > global view of the cluster allocation and demand. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7403) Compute global and local preemption
[ https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221493#comment-16221493 ] Carlo Curino edited comment on YARN-7403 at 10/27/17 12:13 AM: --- Draft patch re-using the {{PreemptableResourceCalculator}} to compute global preemption, and simple demand-based algorithm to "split" the global preemption to local preemptions. was (Author: curino): Draft patch re-using the PreemptatbleResourceCalculator to compute global preemption, and simple demand-based algorithm to "split" the global preemption to local preemptions. > Compute global and local preemption > --- > > Key: YARN-7403 > URL: https://issues.apache.org/jira/browse/YARN-7403 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-7403.draft.patch > > > This JIRA tracks algorithmic effort to combine the local queue views of > capacity guarantee/use/demand and compute the global amount of preemption, > and based on that, "where" (in which sub-cluster) preemption will be enacted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7403) Compute global and local preemption
[ https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-7403: --- Attachment: YARN-7403.draft.patch Draft patch re-using the PreemptableResourceCalculator to compute global preemption, and a simple demand-based algorithm to "split" the global preemption to local preemptions. > Compute global and local preemption > --- > > Key: YARN-7403 > URL: https://issues.apache.org/jira/browse/YARN-7403 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-7403.draft.patch > > > This JIRA tracks algorithmic effort to combine the local queue views of > capacity guarantee/use/demand and compute the global amount of preemption, > and based on that, "where" (in which sub-cluster) preemption will be enacted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7178) Add documentation for Container Update API
[ https://issues.apache.org/jira/browse/YARN-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-7178: -- Attachment: YARN-7178.001.patch Attaching initial patch. > Add documentation for Container Update API > -- > > Key: YARN-7178 > URL: https://issues.apache.org/jira/browse/YARN-7178 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh >Priority: Blocker > Attachments: YARN-7178.001.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-7403) Compute global and local preemption
[ https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino reassigned YARN-7403: -- Assignee: Carlo Curino > Compute global and local preemption > --- > > Key: YARN-7403 > URL: https://issues.apache.org/jira/browse/YARN-7403 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Carlo Curino >Assignee: Carlo Curino > > This JIRA tracks algorithmic effort to combine the local queue views of > capacity guarantee/use/demand and compute the global amount of preemption, > and based on that, "where" (in which sub-cluster) preemption will be enacted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7404) RM federation heartbeat to StateStore must include "queue state"
Carlo Curino created YARN-7404: -- Summary: RM federation heartbeat to StateStore must include "queue state" Key: YARN-7404 URL: https://issues.apache.org/jira/browse/YARN-7404 Project: Hadoop YARN Issue Type: Sub-task Reporter: Carlo Curino -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7403) Compute global and local preemption
Carlo Curino created YARN-7403: -- Summary: Compute global and local preemption Key: YARN-7403 URL: https://issues.apache.org/jira/browse/YARN-7403 Project: Hadoop YARN Issue Type: Sub-task Components: federation Reporter: Carlo Curino This JIRA tracks algorithmic effort to combine the local queue views of capacity guarantee/use/demand and compute the global amount of preemption, and based on that, "where" (in which sub-cluster) preemption will be enacted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-7402) Federation: Global Queues
[ https://issues.apache.org/jira/browse/YARN-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino reassigned YARN-7402: -- Assignee: Carlo Curino > Federation: Global Queues > - > > Key: YARN-7402 > URL: https://issues.apache.org/jira/browse/YARN-7402 > Project: Hadoop YARN > Issue Type: New Feature > Components: federation >Reporter: Carlo Curino >Assignee: Carlo Curino > > YARN Federation today requires manual configuration of queues within each > sub-cluster, and each RM operates "in isolation". This has a few issues: > # Preemption is computed locally (and might far exceed the global need) > # Jobs within a queue are forced to consume their resources "evenly" based on > queue mapping > This umbrella JIRA tracks a new feature that leverages the > FederationStateStore as a synchronization mechanism among RMs, and allows for > allocation and preemption decisions to be based on a (close to up-to-date) > global view of the cluster allocation and demand. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7402) Federation: Global Queues
Carlo Curino created YARN-7402: -- Summary: Federation: Global Queues Key: YARN-7402 URL: https://issues.apache.org/jira/browse/YARN-7402 Project: Hadoop YARN Issue Type: New Feature Components: federation Reporter: Carlo Curino YARN Federation today requires manual configuration of queues within each sub-cluster, and each RM operates "in isolation". This has a few issues: # Preemption is computed locally (and might far exceed the global need) # Jobs within a queue are forced to consume their resources "evenly" based on queue mapping This umbrella JIRA tracks a new feature that leverages the FederationStateStore as a synchronization mechanism among RMs, and allows for allocation and preemption decisions to be based on a (close to up-to-date) global view of the cluster allocation and demand. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5447) Consider including allocationRequestId in NMContainerStatus to allow recovery in case of RM failover
[ https://issues.apache.org/jira/browse/YARN-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221447#comment-16221447 ] Arun Suresh commented on YARN-5447: --- Just noticed your comment [~jianhe]. I don't think I understand your concern. Let us try to distinguish between 2 failover scenarios: # If the RM fails over, assuming the AM is still running when the new RM comes up, the AM will re-register and, in the response, the RM will notify it of all its currently running containers. The RM recreates this list from the NMContainerStatus it receives from the NM heartbeats. Since the AM keeps the mapping between allocationReqId and containerType/role in memory, I am guessing we are fine. # If the AM fails over, we will get a new app attempt, and this new app attempt will receive all the previous attempt's running containers on registration. In this case, the mapping might be lost if the AM had not persisted it somewhere. This JIRA was to track case 1; we can expand the scope to solve case 2. > Consider including allocationRequestId in NMContainerStatus to allow recovery > in case of RM failover > > > Key: YARN-5447 > URL: https://issues.apache.org/jira/browse/YARN-5447 > Project: Hadoop YARN > Issue Type: Sub-task > Components: applications, resourcemanager >Reporter: Subru Krishnan >Assignee: Subru Krishnan > > We have added a mapping of the allocated container to the original request > through YARN-4887/YARN-4888. There is a corner case in which the mapping will > be lost, i.e. if RM fails over before notifying the AM about newly allocated > container(s). This JIRA tracks the changes required to include the > allocationRequestId in NMContainerStatus to allow recovery in case of RM > failover. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
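Scenario 1 above relies on the AM's in-memory allocationRequestId-to-role map surviving RM failover. A toy re-binding step might look like the following — the types and names are hypothetical stand-ins, not the YARN API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of scenario 1: after an RM failover the AM is handed back
// its running containers; if each container carries its allocationRequestId,
// the AM can re-bind containers to roles from its own in-memory map.
// Containers whose request id is unknown (scenario 2: a failed-over AM with
// no persisted map) simply cannot be re-bound.
final class ContainerRebind {
    // reqIdToRole: kept in AM memory across RM failover.
    // containerToReqId: containers reported back on AM re-registration.
    static Map<String, String> rebind(Map<Long, String> reqIdToRole,
                                      Map<String, Long> containerToReqId) {
        Map<String, String> containerRoles = new HashMap<>();
        for (Map.Entry<String, Long> e : containerToReqId.entrySet()) {
            String role = reqIdToRole.get(e.getValue());
            if (role != null) {
                containerRoles.put(e.getKey(), role); // re-bound successfully
            }
        }
        return containerRoles;
    }
}
```

The interesting property is the drop-out: any container whose allocationRequestId is missing from the map represents exactly the information lost in the AM-failover case.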
[jira] [Commented] (YARN-7178) Add documentation for Container Update API
[ https://issues.apache.org/jira/browse/YARN-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221440#comment-16221440 ] Andrew Wang commented on YARN-7178: --- [~asuresh] ping again? > Add documentation for Container Update API > -- > > Key: YARN-7178 > URL: https://issues.apache.org/jira/browse/YARN-7178 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh >Priority: Blocker > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7397) Reduce lock contention in FairScheduler#getAppWeight()
[ https://issues.apache.org/jira/browse/YARN-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221247#comment-16221247 ] Yufei Gu edited comment on YARN-7397 at 10/26/17 11:23 PM: --- Nice finding! Thanks for working on this. The patch looks good to me. I was wondering whether {{getAppWeight()}} really needs to be in class FairScheduler; {{FSAppAttempt}} seems like a good place to hold it, but moving it may involve some lock rewriting. I don't think it is worth doing in this JIRA if it is too complicated. +1. was (Author: yufeigu): Nice finding! Thanks for working on this. The patch looks good to me. I was wandering {{getAppWeight()}} isn't necessary to be in class FairScheduler, {{FSAppAttempt}} seems a good place to hold that, but it may involve some lock rewriting. I don't think it is worth to do in this jira if it is two complicated. > Reduce lock contention in FairScheduler#getAppWeight() > -- > > Key: YARN-7397 > URL: https://issues.apache.org/jira/browse/YARN-7397 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.0.0-beta1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-7397.001.patch > > > In profiling the fair scheduler, a large amount of time is spent waiting to > get the lock in {{FairScheduler.getAppWeight()}}, when the lock isn't > actually needed. This patch reduces the scope of the lock to eliminate that > contention. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
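The kind of lock-scope reduction YARN-7397 describes can be sketched generically. The class and field names below are illustrative, not FairScheduler's actual members; only the shape of the fix (guard the read of mutable state, compute outside the lock) is the point:

```java
// Generic sketch of shrinking a critical section: only the read of mutable
// configuration is synchronized, while the comparatively expensive weight
// math runs without holding the lock. Illustrative only.
class AppWeightSketch {
    private final Object lock = new Object();
    private boolean sizeBasedWeight = true; // mutable config, hence the lock

    double getAppWeight(double demand) {
        boolean useSizeBasedWeight;
        synchronized (lock) {
            // narrow critical section: a single flag read
            useSizeBasedWeight = sizeBasedWeight;
        }
        // expensive computation happens outside the lock
        return useSizeBasedWeight ? Math.log1p(demand) / Math.log(2) : 1.0;
    }
}
```

Under contention, threads now serialize only on the flag read rather than on the whole weight computation, which is where the profiled wait time was going.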
[jira] [Commented] (YARN-7394) Merge code paths for Reservation/Plan queues and Auto Created queues
[ https://issues.apache.org/jira/browse/YARN-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221434#comment-16221434 ] Hadoop QA commented on YARN-7394: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 51 new + 212 unchanged - 14 fixed = 263 total (was 226) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 13s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 31s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7394 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894223/YARN-7394.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 1d69314c8bed 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 25932da | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/18166/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | compile |
[jira] [Commented] (YARN-7384) Remove apiserver cmd and merge service cmd into application cmd
[ https://issues.apache.org/jira/browse/YARN-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221429#comment-16221429 ] Hadoop QA commented on YARN-7384: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 4s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} yarn-native-services Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 54s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 52s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 18s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 47s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 37s{color} | {color:green} yarn-native-services passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 25s{color} | {color:green} root: The patch generated 0 new + 73 unchanged - 276 fixed = 73 total (was 349) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 24s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 10s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}194m 18s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} |
[jira] [Updated] (YARN-7390) All reservation related test cases failed when TestYarnClient runs against Fair Scheduler.
[ https://issues.apache.org/jira/browse/YARN-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-7390: --- Attachment: YARN-7390.002.patch Patch v2 is a parameterized version. > All reservation related test cases failed when TestYarnClient runs against > Fair Scheduler. > -- > > Key: YARN-7390 > URL: https://issues.apache.org/jira/browse/YARN-7390 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler, reservation system >Affects Versions: 2.9.0, 3.0.0, 3.1.0 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-7390.001.patch, YARN-7390.002.patch > > > All reservation related test cases failed when {{TestYarnClient}} runs > against Fair Scheduler. To reproduce it, you need to set scheduler class to > Fair Scheduler in yarn-default.xml. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7378) Documentation changes post branch-2 merge
[ https://issues.apache.org/jira/browse/YARN-7378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C updated YARN-7378: - Target Version/s: 2.9.0 > Documentation changes post branch-2 merge > - > > Key: YARN-7378 > URL: https://issues.apache.org/jira/browse/YARN-7378 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineclient, timelinereader, timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C > Attachments: YARN-7378-branch-2.0001.patch, schema creation > documentation.png > > > Need to update the documentation for the schema creator command. It should > include the timeline-service-hbase jar as well as hbase-server jar in > classpath when the command is to be run. Due to YARN-7190 classpath changes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7378) Documentation changes post branch-2 merge
[ https://issues.apache.org/jira/browse/YARN-7378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C updated YARN-7378: - Attachment: YARN-7378-branch-2.0001.patch schema creation documentation.png Uploading patch 0001. Also attaching screen shot showing the documentation updates. > Documentation changes post branch-2 merge > - > > Key: YARN-7378 > URL: https://issues.apache.org/jira/browse/YARN-7378 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineclient, timelinereader, timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C > Attachments: YARN-7378-branch-2.0001.patch, schema creation > documentation.png > > > Need to update the documentation for the schema creator command. It should > include the timeline-service-hbase jar as well as hbase-server jar in > classpath when the command is to be run. Due to YARN-7190 classpath changes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7197) Add support for a volume blacklist for docker containers
[ https://issues.apache.org/jira/browse/YARN-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221402#comment-16221402 ] Eric Yang commented on YARN-7197: - [~ebadger] Any other concern about this feature? I will commit this if there is no other objection. > Add support for a volume blacklist for docker containers > > > Key: YARN-7197 > URL: https://issues.apache.org/jira/browse/YARN-7197 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Shane Kumpf >Assignee: Eric Yang > Attachments: YARN-7197.001.patch, YARN-7197.002.patch > > > Docker supports bind mounting host directories into containers. Work is > underway to allow admins to configure a whitelist of volume mounts. While > this is a much needed and useful feature, it opens the door for > misconfiguration that may lead to users being able to compromise or crash the > system. > One example would be allowing users to mount /run from a host running > systemd, and then running systemd in that container, rendering the host > mostly unusable. > This issue is to add support for a default blacklist. The default blacklist > would be where we put files and directories that, if mounted into a container, > are likely to have negative consequences. Users are encouraged not to remove > items from the default blacklist, but may do so if necessary. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
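The whitelist-plus-default-blacklist policy discussed in YARN-7197 can be sketched as a simple prefix check: a requested host path is rejected if it falls under any blacklisted prefix, and otherwise allowed only under a whitelisted one. This is an illustration of the policy, not the container-executor implementation, and the path handling here is naive string matching:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative mount check: blacklist always wins over whitelist, and the
// default is deny. Real code would normalize paths (symlinks, "..") first.
final class MountPolicy {
    private static boolean under(String path, String prefix) {
        return path.equals(prefix)
            || path.startsWith(prefix.endsWith("/") ? prefix : prefix + "/");
    }

    static boolean allowed(String path, List<String> whitelist,
                           List<String> blacklist) {
        for (String b : blacklist) {
            if (under(path, b)) {
                return false; // blacklisted, e.g. /run on a systemd host
            }
        }
        for (String w : whitelist) {
            if (under(path, w)) {
                return true;
            }
        }
        return false; // default deny
    }
}
```

Checking the blacklist first is the design point: even a permissive whitelist entry like "/" cannot re-expose a blacklisted directory such as /run.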
[jira] [Updated] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-6413: Description: Add a RegistryOperations implementation that writes records to the file system. This does not include any changes to the API, to avoid compatibility issues. (was: Add a RegistryOperations implementation that writes records to the file system.) > Yarn Registry FS implementation > --- > > Key: YARN-6413 > URL: https://issues.apache.org/jira/browse/YARN-6413 > Project: Hadoop YARN > Issue Type: Improvement > Components: amrmproxy, api, resourcemanager >Reporter: Ellen Hui >Assignee: Ellen Hui > Attachments: 0001-Registry-API-v2.patch, > 0001-YARN-6413-Yarn-Registry-FS-Implementation.patch, > 0002-Registry-API-v2.patch, 0003-Registry-API-api-only.patch, > 0004-Registry-API-api-stubbed.patch, YARN-6413.v1.patch > > > Add a RegistryOperations implementation that writes records to the file > system. This does not include any changes to the API, to avoid compatibility > issues. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues
[ https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221401#comment-16221401 ] Wangda Tan commented on YARN-6124: -- [~eepayne], Thanks for the review comments. I haven't done any tests so far; I will do that in the next update. YARN-7370 doesn't depend on this JIRA; either patch can go in first, and only a minor rebase is needed in either case. > Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin > -refreshQueues > - > > Key: YARN-6124 > URL: https://issues.apache.org/jira/browse/YARN-6124 > Project: Hadoop YARN > Issue Type: Task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-6124.wip.1.patch, YARN-6124.wip.2.patch > > > Currently, enabling / disabling / updating the SchedulingEditPolicy config requires an RM > restart. This is inconvenient when an admin wants to make changes to > SchedulingEditPolicies. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-6413: Attachment: YARN-6413.v1.patch We've decided to stick with the existing API, in order to avoid the compatibility issues raised by [~jianhe]. This patch now only contains a FS-based implementation and no API changes. > Yarn Registry FS implementation > --- > > Key: YARN-6413 > URL: https://issues.apache.org/jira/browse/YARN-6413 > Project: Hadoop YARN > Issue Type: Improvement > Components: amrmproxy, api, resourcemanager >Reporter: Ellen Hui >Assignee: Ellen Hui > Attachments: 0001-Registry-API-v2.patch, > 0001-YARN-6413-Yarn-Registry-FS-Implementation.patch, > 0002-Registry-API-v2.patch, 0003-Registry-API-api-only.patch, > 0004-Registry-API-api-stubbed.patch, YARN-6413.v1.patch > > > Add a RegistryOperations implementation that writes records to the file > system. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
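A file-system-backed RegistryOperations write can be sketched roughly as below. This is only an illustration of the idea under assumed names (the FsRegistrySketch class and the "_record" file layout are inventions); it is not the code in the attached patch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Rough sketch of a file-system-backed registry: each registry path is
// mirrored as a directory, and the service record is stored in a
// "_record" file inside it. Names and layout are illustrative only.
public class FsRegistrySketch {
    private final Path root;

    public FsRegistrySketch(Path root) {
        this.root = root;
    }

    /** Convenience factory for experiments: back the registry by a temp dir. */
    public static FsRegistrySketch inTempDir() {
        try {
            return new FsRegistrySketch(Files.createTempDirectory("registry"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Write the JSON record for a registry path such as "/users/alice/services/app-1". */
    public void bind(String registryPath, String recordJson) {
        try {
            Path node = root.resolve(registryPath.replaceFirst("^/", ""));
            Files.createDirectories(node);
            Files.write(node.resolve("_record"),
                recordJson.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Read a previously written record, or null if none exists. */
    public String resolve(String registryPath) {
        try {
            Path record = root.resolve(registryPath.replaceFirst("^/", ""))
                .resolve("_record");
            return Files.exists(record)
                ? new String(Files.readAllBytes(record), StandardCharsets.UTF_8)
                : null;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because only the backing store changes, the registry's public API surface stays untouched, which is exactly the compatibility property the comment is after.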
[jira] [Updated] (YARN-7401) Reduce lock contention in ClusterNodeTracker#getClusterCapacity()
[ https://issues.apache.org/jira/browse/YARN-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-7401: --- Summary: Reduce lock contention in ClusterNodeTracker#getClusterCapacity() (was: Reduce lock contention in ClusterNodeTracker#getClusterResource()) > Reduce lock contention in ClusterNodeTracker#getClusterCapacity() > - > > Key: YARN-7401 > URL: https://issues.apache.org/jira/browse/YARN-7401 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-7401.001.patch > > > Profiling the code shows massive latency in > {{ClusterNodeTracker.getClusterResource()}} on getting the lock. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7401) Reduce lock contention in ClusterNodeTracker#getClusterCapacity()
[ https://issues.apache.org/jira/browse/YARN-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-7401: --- Description: Profiling the code shows massive latency in {{ClusterNodeTracker.getClusterCapacity()}} on getting the lock. (was: Profiling the code shows massive latency in {{ClusterNodeTracker.getClusterResource()}} on getting the lock.) > Reduce lock contention in ClusterNodeTracker#getClusterCapacity() > - > > Key: YARN-7401 > URL: https://issues.apache.org/jira/browse/YARN-7401 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-7401.001.patch > > > Profiling the code shows massive latency in > {{ClusterNodeTracker.getClusterCapacity()}} on getting the lock. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7401) Reduce lock contention in ClusterNodeTracker#getClusterResource()
[ https://issues.apache.org/jira/browse/YARN-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-7401: --- Attachment: YARN-7401.001.patch To get rid of the lock in {{getClusterCapacity()}}, I moved the copy of {{clusterCapacity}} to the point where {{clusterCapacity}} is modified. Then, by making {{staleClusterCapacity}} volatile, I can remove the lock. Tests show a significant fair scheduler performance increase with this patch. I suspect the capacity scheduler will also benefit. > Reduce lock contention in ClusterNodeTracker#getClusterResource() > - > > Key: YARN-7401 > URL: https://issues.apache.org/jira/browse/YARN-7401 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.1.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-7401.001.patch > > > Profiling the code shows massive latency in > {{ClusterNodeTracker.getClusterResource()}} on getting the lock. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
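The pattern described in the comment (copy on the write path, publish through a volatile field, read without the lock) can be sketched like this. It is a simplified stand-in: the Capacity value object replaces YARN's Resource type, and the class and field names are illustrative, not the actual ClusterNodeTracker code:

```java
// Sketch of the lock-avoidance pattern from this patch: the aggregate is
// recomputed under the write lock whenever node state changes, and
// published through a volatile field, so the hot read path never locks.
public class CapacityTracker {
    static final class Capacity {
        final long memoryMb;
        final int vcores;
        Capacity(long memoryMb, int vcores) {
            this.memoryMb = memoryMb;
            this.vcores = vcores;
        }
    }

    private final Object writeLock = new Object();
    private long totalMemoryMb;
    private int totalVcores;
    // Stale-but-consistent snapshot; volatile makes the reference safely
    // readable without holding any lock.
    private volatile Capacity snapshot = new Capacity(0, 0);

    public void addNode(long memoryMb, int vcores) {
        synchronized (writeLock) {
            totalMemoryMb += memoryMb;
            totalVcores += vcores;
            // Copy on the (rare) write path instead of the (hot) read path.
            snapshot = new Capacity(totalMemoryMb, totalVcores);
        }
    }

    /** Lock-free read of the cluster capacity. */
    public Capacity getClusterCapacity() {
        return snapshot;
    }
}
```

The trade-off is that readers may observe a snapshot that is an update behind, which is acceptable here because the scheduler already tolerates slightly stale capacity.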
[jira] [Created] (YARN-7401) Reduce lock contention in ClusterNodeTracker#getClusterResource()
Daniel Templeton created YARN-7401: -- Summary: Reduce lock contention in ClusterNodeTracker#getClusterResource() Key: YARN-7401 URL: https://issues.apache.org/jira/browse/YARN-7401 Project: Hadoop YARN Issue Type: Improvement Components: resourcemanager Affects Versions: 3.1.0 Reporter: Daniel Templeton Assignee: Daniel Templeton Profiling the code shows massive latency in {{ClusterNodeTracker.getClusterResource()}} on getting the lock. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7400) incorrect log preview displayed in jobhistory server ui
Santhosh B Gowda created YARN-7400: -- Summary: incorrect log preview displayed in jobhistory server ui Key: YARN-7400 URL: https://issues.apache.org/jira/browse/YARN-7400 Project: Hadoop YARN Issue Type: Bug Components: yarn Affects Versions: 2.7.3 Reporter: Santhosh B Gowda Priority: Blocker In the JobHistory Server UI, the container log preview is displayed incorrectly; for example, launch_container.sh displays stderr logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_
[ https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221343#comment-16221343 ] Hadoop QA commented on YARN-7320: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 35m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7320 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894008/YARN-7320.01.addendum.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux dd8f4e629805 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 25932da | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18165/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18165/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Duplicate LiteralByteStrings in
[jira] [Commented] (YARN-7398) LICENSE.txt is broken in branch-2 by YARN-4849 merge
[ https://issues.apache.org/jira/browse/YARN-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221328#comment-16221328 ] Subru Krishnan commented on YARN-7398: -- [~varun_saxena], thanks for your prompt response. I am generally +1 on keeping the LICENSE consistent with trunk, especially considering the work done in HADOOP-13780, but my concern with the branch-2 backport is that we dropped a few licenses and picked up a few wrong versions (Java 8 instead of 7). Captured a couple below for illustration: {code} @@ -616,8 +614,6 @@ For: hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-1.10.2.min.js hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery -Apache HBase - Server which contains JQuery minified javascript library version 1.8.3 -Microsoft JDBC Driver for SQLServer - version 6.2.1.jre7 Copyright jQuery Foundation and other contributors, https://jquery.org/ @@ -693,9 +689,8 @@ hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/d3-LICENSE The binary distribution of this product bundles these dependencies under the following license: -HSQLDB Database 2.3.4 +HSQLDB Database 2.0.0 -(HSQL License) "COPYRIGHTS AND LICENSES (based on BSD License) {code} > LICENSE.txt is broken in branch-2 by YARN-4849 merge > > > Key: YARN-7398 > URL: https://issues.apache.org/jira/browse/YARN-7398 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Varun Saxena >Priority: Blocker > Attachments: YARN-7398.branch-2.01.patch > > > YARN-4849 (commit sha id 56654d8820f345fdefd6a3f81836125aa67adbae) seems to > have been based on a stale version of LICENSE.txt (e.g. HSQLDB, gtest), > so I have reverted it. > [~leftnoteasy]/[~sunilg], can you guys take a look and fix the UI v2 licenses > asap. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7276) Federation Router Web Service fixes
[ https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-7276: -- Attachment: YARN-7276.013.patch > Federation Router Web Service fixes > --- > > Key: YARN-7276 > URL: https://issues.apache.org/jira/browse/YARN-7276 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: YARN-7276-branch-2.000.patch, > YARN-7276-branch-2.001.patch, YARN-7276-branch-2.002.patch, > YARN-7276-branch-2.003.patch, YARN-7276-branch-2.004.patch, > YARN-7276.000.patch, YARN-7276.001.patch, YARN-7276.002.patch, > YARN-7276.003.patch, YARN-7276.004.patch, YARN-7276.005.patch, > YARN-7276.006.patch, YARN-7276.007.patch, YARN-7276.009.patch, > YARN-7276.010.patch, YARN-7276.011.patch, YARN-7276.012.patch, > YARN-7276.013.patch > > > While testing YARN-3661, I found a few issues with the REST interface in the > Router: > * No support for empty content (error 204) > * Media type support > * Attributes in {{FederationInterceptorREST}} > * Support for empty states and labels > * DefaultMetricsSystem initialization is missing -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7389) Make TestResourceManager Scheduler agnostic
[ https://issues.apache.org/jira/browse/YARN-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221318#comment-16221318 ] Subru Krishnan commented on YARN-7389: -- Thanks [~rkanter] for doing this, appreciated. > Make TestResourceManager Scheduler agnostic > --- > > Key: YARN-7389 > URL: https://issues.apache.org/jira/browse/YARN-7389 > Project: Hadoop YARN > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0 >Reporter: Robert Kanter >Assignee: Robert Kanter > Fix For: 3.0.0 > > Attachments: YARN-7389.001.patch > > > Many of the tests in {{TestResourceManager}} override the scheduler to always > be {{CapacityScheduler}}. However, these tests should be made scheduler > agnostic (they are testing the RM, not the scheduler). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath
[ https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221295#comment-16221295 ] Hadoop QA commented on YARN-7190: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 0s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 54s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 15m 45s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 7m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 4s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 52s{color} | {color:red} root generated 522 new + 726 unchanged - 0 fixed = 1248 total (was 726) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 24s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 7m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s{color} | {color:green} hadoop-assemblies in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 3s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 49s{color} | {color:red} hadoop-yarn-project in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}313m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter | | |
[jira] [Updated] (YARN-6413) Yarn Registry FS implementation
[ https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ellen Hui updated YARN-6413: Attachment: 0001-YARN-6413-Yarn-Registry-FS-Implementation.patch First patch with no API changes, only store implementation. > Yarn Registry FS implementation > --- > > Key: YARN-6413 > URL: https://issues.apache.org/jira/browse/YARN-6413 > Project: Hadoop YARN > Issue Type: Improvement > Components: amrmproxy, api, resourcemanager >Reporter: Ellen Hui >Assignee: Ellen Hui > Attachments: 0001-Registry-API-v2.patch, > 0001-YARN-6413-Yarn-Registry-FS-Implementation.patch, > 0002-Registry-API-v2.patch, 0003-Registry-API-api-only.patch, > 0004-Registry-API-api-stubbed.patch > > > Add a RegistryOperations implementation that writes records to the file > system. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow
[ https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221268#comment-16221268 ] Robert Kanter commented on YARN-7262: - Test failures are unrelated (YARN-6747 and YARN-7080). Thanks for the reviews [~templedf]; will commit later today. > Add a hierarchy into the ZKRMStateStore for delegation token znodes to > prevent jute buffer overflow > --- > > Key: YARN-7262 > URL: https://issues.apache.org/jira/browse/YARN-7262 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 2.6.0 >Reporter: Robert Kanter >Assignee: Robert Kanter > Attachments: YARN-7262.001.patch, YARN-7262.002.patch, > YARN-7262.003.patch, YARN-7262.003.patch > > > We've seen users who are running into a problem where the RM is storing so > many delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those > znodes is larger than the jute buffer. This is fine during operations, but > becomes a problem on a failover because the RM will try to read in all of > the token znodes (i.e. call {{getChildren}} on the parent znode). This is > particularly bad because everything appears to be okay, but then if a > failover occurs you end up with no active RMs. > There was a similar problem with the Yarn application data that was fixed in > YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull > subchildren without overflowing the jute buffer (though it's off by default). > We should add a hierarchy similar to that of YARN-2962, but for the > delegation token znodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
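The hierarchy scheme being borrowed from YARN-2962 can be sketched as path construction: the leading digits of the token sequence number become an intermediate "bucket" znode, so no single {{getChildren}} call has to return every token. The node names and split width below are assumptions for illustration, not the actual patch:

```java
// Sketch of a znode hierarchy for RM delegation token znodes: the
// sequence number is zero-padded and its leading digits become an
// intermediate bucket znode, so the parent never has to list every
// token in one getChildren() response. Names are illustrative only.
public class TokenZNodePath {
    /**
     * splitIndex = how many trailing digits vary within one bucket.
     * pathFor("/rmstore/tokens", 1234567, 4)
     *   -> "/rmstore/tokens/000123/RMDelegationToken_1234567"
     */
    public static String pathFor(String root, int seqNum, int splitIndex) {
        String digits = String.format("%010d", seqNum); // zero-pad for stable prefixes
        String bucket = digits.substring(0, digits.length() - splitIndex);
        return root + "/" + bucket + "/RMDelegationToken_" + seqNum;
    }
}
```

With a split of 4, each bucket holds at most 10,000 token znodes, so a listing of any one parent stays well under the jute buffer limit regardless of how many tokens exist in total.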
[jira] [Commented] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow
[ https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221264#comment-16221264 ] Hadoop QA commented on YARN-7262: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 276 unchanged - 1 fixed = 285 total (was 277) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 5s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7262 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894193/YARN-7262.003.patch | | Optional Tests | asflicense compile javac javadoc
[jira] [Updated] (YARN-7394) Merge code paths for Reservation/Plan queues and Auto Created queues
[ https://issues.apache.org/jira/browse/YARN-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suma Shivaprasad updated YARN-7394: --- Attachment: YARN-7394.1.patch Rebased with trunk > Merge code paths for Reservation/Plan queues and Auto Created queues > > > Key: YARN-7394 > URL: https://issues.apache.org/jira/browse/YARN-7394 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad > Attachments: YARN-7394.1.patch, YARN-7394.patch > > > The initialization/reinitialization logic for ReservationQueue and > AutoCreated Leaf queues are similar. The proposal is to rename > ReservationQueue to a more generic name AutoCreatedLeafQueue which are either > managed by PlanQueue(already exists) or AutoCreateEnabledParentQueue (new > class). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7332) Compute effectiveCapacity per each resource vector
[ https://issues.apache.org/jira/browse/YARN-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221251#comment-16221251 ] Wangda Tan commented on YARN-7332: -- [~sunilg], Floor makes more sense to me. +1 to the latest patch; could you confirm that the failed unit tests are unrelated? I will commit the patch once I get your confirmation. Thanks, > Compute effectiveCapacity per each resource vector > -- > > Key: YARN-7332 > URL: https://issues.apache.org/jira/browse/YARN-7332 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: YARN-5881 >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7332.YARN-5881.001.patch, > YARN-7332.YARN-5881.002.patch, YARN-7332.YARN-5881.003.patch > > > Currently effective capacity uses a generalized approach based on dominance. > Hence some vectors may not be calculated correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7397) Reduce lock contention in FairScheduler#getAppWeight()
[ https://issues.apache.org/jira/browse/YARN-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221247#comment-16221247 ] Yufei Gu commented on YARN-7397: Nice find! Thanks for working on this. The patch looks good to me. I was wondering whether {{getAppWeight()}} really needs to be in class FairScheduler; {{FSAppAttempt}} seems like a good place to hold it, but moving it may involve some lock rewriting. I don't think it is worth doing in this jira if it turns out to be too complicated. > Reduce lock contention in FairScheduler#getAppWeight() > -- > > Key: YARN-7397 > URL: https://issues.apache.org/jira/browse/YARN-7397 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.0.0-beta1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-7397.001.patch > > > In profiling the fair scheduler, a large amount of time is spent waiting to > get the lock in {{FairScheduler.getAppWeight()}}, when the lock isn't > actually needed. This patch reduces the scope of the lock to eliminate that > contention. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
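The general technique behind "reduces the scope of the lock" can be sketched as below. This is not the actual FairScheduler code or the attached patch — `WeightSketch`, `demandMb`, and the weight formula are hypothetical stand-ins — it only illustrates holding the lock just long enough to snapshot shared state and doing the computation outside the critical section.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch (hypothetical names, not FairScheduler itself) of
// narrowing a lock's scope: lock only the shared-state read, then compute
// the weight outside the critical section.
public class WeightSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private long demandMb = 1; // shared state, mutated under lock elsewhere

    public double getAppWeight(boolean sizeBasedWeight) {
        long snapshot;
        lock.lock();
        try {
            snapshot = demandMb; // only this read needs the lock
        } finally {
            lock.unlock();
        }
        // The (potentially expensive) math runs with no lock held, so other
        // threads contending for the lock are blocked far more briefly.
        return sizeBasedWeight ? Math.log1p(snapshot) / Math.log(2) : 1.0;
    }
}
```

The design choice is the classic snapshot-then-compute pattern: correctness is preserved as long as the computation only needs a consistent point-in-time view of the shared fields.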
[jira] [Updated] (YARN-7330) Add support to show GPU on UI/metrics
[ https://issues.apache.org/jira/browse/YARN-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7330: - Attachment: YARN-7330.1-wip.patch Attached ver.1 wip patch (rebased on top of YARN-7224). > Add support to show GPU on UI/metrics > - > > Key: YARN-7330 > URL: https://issues.apache.org/jira/browse/YARN-7330 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-7330.0-wip.patch, YARN-7330.1-wip.patch, > screencapture-0-wip.png > > > We should be able to view GPU metrics from UI/REST API. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7395) NM fails to successfully kill tasks that run over their memory limit
[ https://issues.apache.org/jira/browse/YARN-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221228#comment-16221228 ] Eric Badger commented on YARN-7395: --- {noformat} Oct 26 15:49:46 xxx.xxx.xxx dockerd-current: time="2017-10-26T15:49:46.287169432Z" level=error msg="Handler for POST /v1.24/containers/%27container_e127_1508997850588_0001_02_01%27/stop?t=10 returned error: No such container: 'container_e127_1508997850588_0001_02_01'" Oct 26 15:49:46 xxx.xxx.xxx dockerd-current: time="2017-10-26T15:49:46.287193005Z" level=error msg="Handler for POST /v1.24/containers/'container_e127_1508997850588_0001_02_01'/stop returned error: No such container: 'container_e127_1508997850588_0001_02_01'" {noformat} Update: Looks like the docker stop command is failing because it's including the {{'}} in the container name. It ends up not finding the container because of that, which is why it fails with exit code 1. Not sure if this will be a problem in branch-2/trunk because of the refactoring that came with YARN-6623. Our internal branch has not currently pulled back YARN-6623. > NM fails to successfully kill tasks that run over their memory limit > > > Key: YARN-7395 > URL: https://issues.apache.org/jira/browse/YARN-7395 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Eric Badger > > The NM correctly notes that the container is over its configured limit, but > then fails to successfully kill the process. So the Docker container AM stays > around and the job keeps running -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
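The failure mode in the log above — the literal {{'}} becoming part of the container name — can be reproduced in miniature. This is a hypothetical sketch (the `QuotingSketch`/`pathFor` names are invented), showing only that if the name is wrapped in single quotes before being handed to the Docker remote API, the quotes are URL-encoded as `%27` and Docker looks up a name that does not exist.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of how a quoted container name corrupts the Docker
// remote API path, matching the "%27...%27" seen in the dockerd log above.
public class QuotingSketch {
    public static String pathFor(String name) {
        // Docker matches container names literally; any surrounding quotes
        // become part of the name and are percent-encoded in the URL.
        return "/v1.24/containers/" + URLEncoder.encode(name, StandardCharsets.UTF_8) + "/stop";
    }
}
```

The fix direction is simply to pass the bare container id to the stop command rather than a shell-quoted form.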
[jira] [Issue Comment Deleted] (YARN-7395) NM fails to successfully kill tasks that run over their memory limit
[ https://issues.apache.org/jira/browse/YARN-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated YARN-7395: -- Comment: was deleted (was: {noformat} Oct 26 15:49:46 fsta100n11.tan.ygrid.yahoo.com dockerd-current: time="2017-10-26T15:49:46.287169432Z" level=error msg="Handler for POST /v1.24/containers/%27container_e127_1508997850588_0001_02_01%27/stop?t=10 returned error: No such container: 'container_e127_1508997850588_0001_02_01'" Oct 26 15:49:46 fsta100n11.tan.ygrid.yahoo.com dockerd-current: time="2017-10-26T15:49:46.287193005Z" level=error msg="Handler for POST /v1.24/containers/'container_e127_1508997850588_0001_02_01'/stop returned error: No such container: 'container_e127_1508997850588_0001_02_01'" {noformat} Update: Looks like the docker stop command is failing because it's including the {{'}} in the container name. It ends up not finding the container because of that, which is why it fails with exit code 1. Not sure if this will be a problem in branch-2/trunk because of the refactoring that came with YARN-6623. Our internal branch has not currently pulled back YARN-6623. ) > NM fails to successfully kill tasks that run over their memory limit > > > Key: YARN-7395 > URL: https://issues.apache.org/jira/browse/YARN-7395 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Eric Badger > > The NM correctly notes that the container is over its configured limit, but > then fails to successfully kill the process. So the Docker container AM stays > around and the job keeps running -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7378) Documentation changes post branch-2 merge
[ https://issues.apache.org/jira/browse/YARN-7378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C updated YARN-7378: - Fix Version/s: (was: 2.9.0) > Documentation changes post branch-2 merge > - > > Key: YARN-7378 > URL: https://issues.apache.org/jira/browse/YARN-7378 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineclient, timelinereader, timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C > > Need to update the documentation for the schema creator command. It should > include the timeline-service-hbase jar as well as hbase-server jar in > classpath when the command is to be run. Due to YARN-7190 classpath changes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7398) LICENSE.txt is broken in branch-2 by YARN-4849 merge
[ https://issues.apache.org/jira/browse/YARN-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C updated YARN-7398: - Issue Type: Sub-task (was: Bug) Parent: YARN-7055 > LICENSE.txt is broken in branch-2 by YARN-4849 merge > > > Key: YARN-7398 > URL: https://issues.apache.org/jira/browse/YARN-7398 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Varun Saxena >Priority: Blocker > Attachments: YARN-7398.branch-2.01.patch > > > YARN-4849 (commit sha id 56654d8820f345fdefd6a3f81836125aa67adbae) seems to > have been based out of stale version of LICENSE.txt, for e.g: HSQLDB, gtest > etc, so I have reverted it. > [~leftnoteasy]/[~sunilg], can you guys take a look and fix the UI v2 licenses > asap. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7398) LICENSE.txt is broken in branch-2 by YARN-4849 merge
[ https://issues.apache.org/jira/browse/YARN-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221222#comment-16221222 ] Vrushali C commented on YARN-7398: -- Hi [~wangda] It looks like Varun has made this file consistent with trunk, as per his latest comment. Since it's already night in India, I thought I'd ask whether you have any thoughts on that? > LICENSE.txt is broken in branch-2 by YARN-4849 merge > > > Key: YARN-7398 > URL: https://issues.apache.org/jira/browse/YARN-7398 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Varun Saxena >Priority: Blocker > Attachments: YARN-7398.branch-2.01.patch > > > YARN-4849 (commit sha id 56654d8820f345fdefd6a3f81836125aa67adbae) seems to > have been based out of a stale version of LICENSE.txt, for e.g: HSQLDB, gtest > etc, so I have reverted it. > [~leftnoteasy]/[~sunilg], can you guys take a look and fix the UI v2 licenses > asap. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7395) NM fails to successfully kill tasks that run over their memory limit
[ https://issues.apache.org/jira/browse/YARN-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221219#comment-16221219 ] Eric Badger commented on YARN-7395: --- {noformat} Oct 26 15:49:46 fsta100n11.tan.ygrid.yahoo.com dockerd-current: time="2017-10-26T15:49:46.287169432Z" level=error msg="Handler for POST /v1.24/containers/%27container_e127_1508997850588_0001_02_01%27/stop?t=10 returned error: No such container: 'container_e127_1508997850588_0001_02_01'" Oct 26 15:49:46 fsta100n11.tan.ygrid.yahoo.com dockerd-current: time="2017-10-26T15:49:46.287193005Z" level=error msg="Handler for POST /v1.24/containers/'container_e127_1508997850588_0001_02_01'/stop returned error: No such container: 'container_e127_1508997850588_0001_02_01'" {noformat} Update: Looks like the docker stop command is failing because it's including the {{'}} in the container name. It ends up not finding the container because of that, which is why it fails with exit code 1. Not sure if this will be a problem in branch-2/trunk because of the refactoring that came with YARN-6623. Our internal branch has not currently pulled back YARN-6623. > NM fails to successfully kill tasks that run over their memory limit > > > Key: YARN-7395 > URL: https://issues.apache.org/jira/browse/YARN-7395 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Eric Badger > > The NM correctly notes that the container is over its configured limit, but > then fails to successfully kill the process. So the Docker container AM stays > around and the job keeps running -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7336) Unsafe cast from long to int Resource.hashCode() method
[ https://issues.apache.org/jira/browse/YARN-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-7336: --- Labels: ready-to-commit (was: ) > Unsafe cast from long to int Resource.hashCode() method > --- > > Key: YARN-7336 > URL: https://issues.apache.org/jira/browse/YARN-7336 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-beta1, 3.1.0 >Reporter: Daniel Templeton >Assignee: Miklos Szegedi >Priority: Critical > Labels: ready-to-commit > Attachments: YARN-7336.000.patch, YARN-7336.001.patch > > > For example: > {code} > final int prime = 47; > long result = 0; > for (ResourceInformation entry : resources) { > result = prime * result + entry.hashCode(); > } > return (int) result; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7336) Unsafe cast from long to int Resource.hashCode() method
[ https://issues.apache.org/jira/browse/YARN-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221215#comment-16221215 ] Daniel Templeton commented on YARN-7336: LGTM +1 > Unsafe cast from long to int Resource.hashCode() method > --- > > Key: YARN-7336 > URL: https://issues.apache.org/jira/browse/YARN-7336 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-beta1, 3.1.0 >Reporter: Daniel Templeton >Assignee: Miklos Szegedi >Priority: Critical > Labels: ready-to-commit > Attachments: YARN-7336.000.patch, YARN-7336.001.patch > > > For example: > {code} > final int prime = 47; > long result = 0; > for (ResourceInformation entry : resources) { > result = prime * result + entry.hashCode(); > } > return (int) result; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
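The bug in the quoted snippet is the truncating cast `(int) result`, which silently discards the high 32 bits of the accumulator. One possible fix direction — a sketch only, not necessarily what the attached patch does — is to fold the long into an int with `Long.hashCode`, which XORs the high and low words instead of dropping them:

```java
// Hedged sketch of a safe long-to-int hash fold. The class and method
// names are illustrative; the quoted Resource.hashCode() iterates
// ResourceInformation entries instead of a plain long[].
public class HashSketch {
    public static int hash(long[] components) {
        final long prime = 47;
        long result = 0;
        for (long c : components) {
            result = prime * result + Long.hashCode(c);
        }
        // Long.hashCode(v) == (int)(v ^ (v >>> 32)): mixes the high bits
        // into the result rather than discarding them with a cast.
        return Long.hashCode(result);
    }
}
```

An alternative is to keep the accumulator as an `int` throughout, accepting ordinary overflow wraparound, which is what most `hashCode()` implementations do.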
[jira] [Commented] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_
[ https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221214#comment-16221214 ] Daniel Templeton commented on YARN-7320: I bumped Jenkins for you. > Duplicate LiteralByteStrings in > SystemCredentialsForAppsProto.credentialsForApp_ > > > Key: YARN-7320 > URL: https://issues.apache.org/jira/browse/YARN-7320 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Fix For: 3.0.0 > > Attachments: YARN-7320.01.addendum.patch, YARN-7320.01.patch, > YARN-7320.02.patch > > > Using jxray (www.jxray.com) I've analyzed several heap dumps from YARN > Resource Manager running in a big cluster. The tool uncovered several sources > of memory waste. One problem, which results in wasting more than a quarter of > all memory, is a large number of duplicate {{LiteralByteString}} objects > coming from the following reference chain: > {code} > 1,011,810K (26.9%): byte[]: 5416705 / 100% dup arrays (22108 unique) > ↖com.google.protobuf.LiteralByteString.bytes > ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$.credentialsForApp_ > ↖{j.u.ArrayList} > ↖j.u.Collections$UnmodifiableRandomAccessList.c > ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$NodeHeartbeatResponseProto.systemCredentialsForApps_ > ↖org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.NodeHeartbeatResponsePBImpl.proto > ↖org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl.latestNodeHeartBeatResponse > ↖org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.rmNode > ... > {code} > That is, collectively reference chains that look as above hold in memory 5.4 > million {{LiteralByteString}} objects, but only ~22 thousand of these objects > are unique. Deduplicating these objects, e.g. using a Google Object Interner > instance, would save ~1GB of memory. 
> It looks like the main place where the above {{LiteralByteString}}s are > created and attached to the {{SystemCredentialsForAppsProto}} objects is in > {{NodeHeartbeatResponsePBImpl.java}}, method > {{addSystemCredentialsToProto()}}. Probably adding a call to an interner > there will fix the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
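The interning idea suggested above can be sketched with the standard library. The comment mentions a Google (Guava) interner — `Interners.newWeakInterner()` — which additionally uses weak references so unused canonical entries can be collected; the stand-in below uses a strong-reference `ConcurrentHashMap` purely to show the deduplication mechanism, and the wiring into `NodeHeartbeatResponsePBImpl` is hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Simplified stand-in for a Guava weak interner: identical values are
// routed through a pool so only one canonical instance is retained,
// which is the deduplication the jxray report above calls for.
public class InternerSketch {
    private static final ConcurrentMap<String, String> POOL = new ConcurrentHashMap<>();

    public static String intern(String value) {
        String prior = POOL.putIfAbsent(value, value);
        return prior != null ? prior : value;
    }
}
```

Applied at the point where the credential byte strings are attached to the proto, this collapses the 5.4 million duplicates down to the ~22 thousand unique payloads.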
[jira] [Commented] (YARN-6927) Add support for individual resource types requests in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221209#comment-16221209 ] Daniel Templeton commented on YARN-6927: So close! You just need to add periods at the end of the javadocs, per the checkstyle comments. > Add support for individual resource types requests in MapReduce > --- > > Key: YARN-6927 > URL: https://issues.apache.org/jira/browse/YARN-6927 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Daniel Templeton >Assignee: Gergo Repas > Attachments: YARN-6927.000.patch, YARN-6927.001.patch, > YARN-6927.002.patch, YARN-6927.003.patch, YARN-6927.004.patch, > YARN-6927.005.patch, YARN-6927.006.patch, YARN-6927.007.patch > > > YARN-6504 adds support for resource profiles in MapReduce jobs, but resource > profiles don't give users much flexibility in their resource requests. To > satisfy users' needs, MapReduce should also allow users to specify arbitrary > resource requests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_
[ https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221206#comment-16221206 ] Misha Dmitriev commented on YARN-7320: -- [~rkanter] [~wangda] looks like in ~24 hrs Jenkins still hasn't processed my patch. Could you please check what's going on? > Duplicate LiteralByteStrings in > SystemCredentialsForAppsProto.credentialsForApp_ > > > Key: YARN-7320 > URL: https://issues.apache.org/jira/browse/YARN-7320 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Fix For: 3.0.0 > > Attachments: YARN-7320.01.addendum.patch, YARN-7320.01.patch, > YARN-7320.02.patch > > > Using jxray (www.jxray.com) I've analyzed several heap dumps from YARN > Resource Manager running in a big cluster. The tool uncovered several sources > of memory waste. One problem, which results in wasting more than a quarter of > all memory, is a large number of duplicate {{LiteralByteString}} objects > coming from the following reference chain: > {code} > 1,011,810K (26.9%): byte[]: 5416705 / 100% dup arrays (22108 unique) > ↖com.google.protobuf.LiteralByteString.bytes > ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$.credentialsForApp_ > ↖{j.u.ArrayList} > ↖j.u.Collections$UnmodifiableRandomAccessList.c > ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$NodeHeartbeatResponseProto.systemCredentialsForApps_ > ↖org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.NodeHeartbeatResponsePBImpl.proto > ↖org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl.latestNodeHeartBeatResponse > ↖org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.rmNode > ... > {code} > That is, collectively reference chains that look as above hold in memory 5.4 > million {{LiteralByteString}} objects, but only ~22 thousand of these objects > are unique. Deduplicating these objects, e.g. using a Google Object Interner > instance, would save ~1GB of memory. 
> It looks like the main place where the above {{LiteralByteString}}s are > created and attached to the {{SystemCredentialsForAppsProto}} objects is in > {{NodeHeartbeatResponsePBImpl.java}}, method > {{addSystemCredentialsToProto()}}. Probably adding a call to an interner > there will fix the problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7224) Support GPU isolation for docker container
[ https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221198#comment-16221198 ] Wangda Tan commented on YARN-7224: -- Failed unit tests are not related to this patch; the TestNodeStatusUpdater failure is related to YARN-7320. I also deployed the latest patch on a GPU cluster and ran Tensorflow via a distributed shell job which requests GPUs; didn't see any issue. > Support GPU isolation for docker container > -- > > Key: YARN-7224 > URL: https://issues.apache.org/jira/browse/YARN-7224 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-7224.001.patch, YARN-7224.002-wip.patch, > YARN-7224.003.patch, YARN-7224.004.patch, YARN-7224.005.patch, > YARN-7224.006.patch, YARN-7224.007.patch, YARN-7224.008.patch > > > This patch is to address issues when docker containers are being used: > 1. GPU driver and nvidia libraries: If GPU drivers and NV libraries are > pre-packaged inside the docker image, they could conflict with the driver and > nvidia libraries installed on the Host OS. An alternative solution is to detect > the Host OS's installed drivers and devices and mount them when launching the docker > container. Please refer to \[1\] for more details. > 2. Image detection: > From \[2\], the challenge is: > bq. Mounting user-level driver libraries and device files clobbers the > environment of the container, it should be done only when the container is > running a GPU application. The challenge here is to determine if a given > image will be using the GPU or not. We should also prevent launching > containers based on a Docker image that is incompatible with the host NVIDIA > driver version, you can find more details on this wiki page. > 3. GPU isolation. > *Proposed solution*: > a. Use nvidia-docker-plugin \[3\] to address issue #1; this is the same > solution used by K8S \[4\]. Issue #2 could be addressed in a separate JIRA. 
> We won't ship nvidia-docker-plugin with our releases and we require the cluster > admin to preinstall nvidia-docker-plugin to use GPU+docker support on YARN. > "nvidia-docker" is a wrapper of the docker binary which could address #3 as well; > however, "nvidia-docker" doesn't provide the same semantics as docker, and it > needs additional environment setup such as PATH/LD_LIBRARY_PATH to use > it. To avoid introducing additional issues, we plan to use the > nvidia-docker-plugin + docker binary approach. > b. To address GPU drivers and nvidia libraries, we use nvidia-docker-plugin > \[3\] to create a volume which includes GPU-related libraries and mount it > when the docker container is launched. Changes include: > - Instead of using {{volume-driver}}, this patch adds a {{docker volume > create}} command to c-e and the NM Java side. The reason is that {{volume-driver}} can > only use a single volume driver for each launched docker container. > - Updated {{c-e}} and the Java side: if a mounted volume is a named volume in > docker, skip checking file existence. (Named volumes still need to be added to > the permitted list of container-executor.cfg). > c. To address the isolation issue: > We found that cgroup + docker doesn't work under newer docker versions which > use {{runc}} as the default runtime. Setting {{--cgroup-parent}} to a cgroup > which includes any {{devices.deny}} rule prevents the docker container from being launched. > Instead, this patch passes the allowed GPU devices via {{--device}} to the docker > launch command. > References: > \[1\] https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-driver > \[2\] https://github.com/NVIDIA/nvidia-docker/wiki/Image-inspection > \[3\] https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker-plugin > \[4\] https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/ -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
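Point (c) above — passing allowed GPU devices to the launch command instead of relying on cgroup device rules — can be sketched as follows. This is an illustrative sketch only: `DeviceArgsSketch` and `launchCommand` are hypothetical names, the device paths are examples, and the real change lives in the container-executor and NM Java code rather than a standalone helper.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of building a docker launch command that grants
// only the allowed GPU devices via --device, per approach (c) above.
public class DeviceArgsSketch {
    public static List<String> launchCommand(String image, List<String> allowedDevices) {
        List<String> cmd = new ArrayList<>();
        cmd.add("docker");
        cmd.add("run");
        // Each allowed device is mapped into the container at the same path;
        // devices not listed here remain invisible to the container.
        for (String dev : allowedDevices) {
            cmd.add("--device=" + dev + ":" + dev);
        }
        cmd.add(image);
        return cmd;
    }
}
```

Compared with a {{devices.deny}} cgroup rule under {{--cgroup-parent}}, this whitelist approach keeps working when {{runc}} is the runtime, which is the compatibility problem the comment describes.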
[jira] [Commented] (YARN-7307) Revisit resource-types.xml loading behaviors
[ https://issues.apache.org/jira/browse/YARN-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221193#comment-16221193 ] Wangda Tan commented on YARN-7307: -- +1, will commit the patch by end of today if no objections. > Revisit resource-types.xml loading behaviors > > > Key: YARN-7307 > URL: https://issues.apache.org/jira/browse/YARN-7307 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Wangda Tan >Assignee: Sunil G >Priority: Blocker > Attachments: YARN-7307.001.patch, YARN-7307.002.patch, > YARN-7307.003.patch, YARN-7307.004.patch > > > Existing feature requires every client has a resource-types.xml in order to > use multiple resource types, should we allow client/AM update supported > resource types via Yarn APIs? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7224) Support GPU isolation for docker container
[ https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221186#comment-16221186 ] Hadoop QA commented on YARN-7224: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 20s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 1s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 40 new + 387 unchanged - 12 fixed = 427 total (was 399) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 53s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 59s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not
[jira] [Commented] (YARN-7389) Make TestResourceManager Scheduler agnostic
[ https://issues.apache.org/jira/browse/YARN-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221182#comment-16221182 ] Robert Kanter commented on YARN-7389: - I just uploaded a branch-2 version of YARN-7146 there. Once that's in, I'll backport YARN-7389 to branch-2. > Make TestResourceManager Scheduler agnostic > --- > > Key: YARN-7389 > URL: https://issues.apache.org/jira/browse/YARN-7389 > Project: Hadoop YARN > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0 >Reporter: Robert Kanter >Assignee: Robert Kanter > Fix For: 3.0.0 > > Attachments: YARN-7389.001.patch > > > Many of the tests in {{TestResourceManager}} override the scheduler to always > be {{CapacityScheduler}}. However, these tests should be made scheduler > agnostic (they are testing the RM, not the scheduler). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6927) Add support for individual resource types requests in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221169#comment-16221169 ] Hadoop QA commented on YARN-6927: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 2s{color} | {color:orange} root: The patch generated 9 new + 879 unchanged - 3 fixed = 888 total (was 882) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 58s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 53s{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}226m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.mapred.pipes.TestPipeApplication | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6927 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894114/YARN-6927.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 20e99abb9c35 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12
[jira] [Updated] (YARN-7146) Many RM unit tests failing with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-7146: Attachment: YARN-7146.004.branch-2.patch Uploading the branch-2 version of the 004 patch. It's mostly the same, but here are the differences: - Miscellaneous line-number differences and the like - An additional method call had to be relocated from {{FairScheduler.UpdateThread}} to {{AbstractYarnScheduler.UpdateThread}} - Replaced Java 8 lambda code in {{ParameterizedSchedulerTestBase}} with Java 7-compatible code - A trivial change to the constructor in {{TestWorkPreservingUnmanagedAM}} > Many RM unit tests failing with FairScheduler > - > > Key: YARN-7146 > URL: https://issues.apache.org/jira/browse/YARN-7146 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter > Fix For: 3.0.0-beta1, 3.1.0 > > Attachments: YARN-7146.001.patch, YARN-7146.002.patch, > YARN-7146.003.patch, YARN-7146.004.branch-2.patch, YARN-7146.004.patch > > > Many of the RM unit tests are failing when using the FairScheduler. > Here is a list of affected test classes: > {noformat} > TestYarnClient > TestApplicationCleanup > TestApplicationMasterLauncher > TestDecommissioningNodesWatcher > TestKillApplicationWithRMHA > TestNodeBlacklistingOnAMFailures > TestRM > TestRMAdminService > TestRMRestart > TestResourceTrackerService > TestWorkPreservingRMRestart > TestAMRMRPCNodeUpdates > TestAMRMRPCResponseId > TestAMRestart > TestApplicationLifetimeMonitor > TestNodesListManager > TestRMContainerImpl > TestAbstractYarnScheduler > TestSchedulerUtils > TestFairOrderingPolicy > TestAMRMTokens > TestDelegationTokenRenewer > {noformat} > Most of the test methods in these classes are failing, though some do succeed. > There are two main categories of issues: > # The test submits an application to the {{MockRM}} and waits for it to enter > a specific state, which it never does, and the test times out.
We need to > call {{update()}} on the scheduler. > # The test throws a {{ClassCastException}} when casting {{FSQueueMetrics}} to > {{CSQueueMetrics}}. This is because {{QueueMetrics}} metrics are static, and > a previous test using the FairScheduler initialized them while the current test is > using the CapacityScheduler. We need to reset the metrics.
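The second failure category above (static {{QueueMetrics}} state leaking from a FairScheduler test into a CapacityScheduler test) can be sketched with a minimal, self-contained example. Note that the class names below ({{Metrics}}, {{FSMetrics}}, {{CSMetrics}}) are hypothetical stand-ins for illustration, not Hadoop's real API:

```java
// Hypothetical stand-in classes illustrating the static-singleton metrics
// pitfall described above; these are NOT Hadoop's real QueueMetrics API.
class Metrics {
    private static Metrics instance;
    static Metrics forScheduler(java.util.function.Supplier<Metrics> factory) {
        if (instance == null) { instance = factory.get(); } // first caller wins
        return instance;
    }
    static void reset() { instance = null; } // what the fix adds between tests
}
class FSMetrics extends Metrics {}
class CSMetrics extends Metrics {}

public class StaticMetricsPitfall {
    public static void main(String[] args) {
        // "Test 1" runs with the FairScheduler and initializes the singleton.
        Metrics first = Metrics.forScheduler(FSMetrics::new);
        // "Test 2" runs with the CapacityScheduler but, without a reset,
        // still sees the FSMetrics instance, so the cast blows up.
        boolean castFailedWithoutReset;
        try {
            CSMetrics leaked = (CSMetrics) Metrics.forScheduler(CSMetrics::new);
            castFailedWithoutReset = false;
        } catch (ClassCastException e) {
            castFailedWithoutReset = true;
        }
        // Resetting the static state between tests makes the cast succeed.
        Metrics.reset();
        CSMetrics fresh = (CSMetrics) Metrics.forScheduler(CSMetrics::new);
        System.out.println(castFailedWithoutReset && fresh instanceof CSMetrics);
    }
}
```

Running the example prints true: the cast fails while the stale singleton is in place and succeeds once it is reset, which is the behavior the patch addresses between test runs.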
[jira] [Updated] (YARN-7146) Many RM unit tests failing with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated YARN-7146: Target Version/s: 3.0.0-beta1, 2.9.0 (was: 3.0.0-beta1) > Many RM unit tests failing with FairScheduler > - > > Key: YARN-7146 > URL: https://issues.apache.org/jira/browse/YARN-7146 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter > Fix For: 3.0.0-beta1, 3.1.0 > > Attachments: YARN-7146.001.patch, YARN-7146.002.patch, > YARN-7146.003.patch, YARN-7146.004.branch-2.patch, YARN-7146.004.patch > > > Many of the RM unit tests are failing when using the FairScheduler. > Here is a list of affected test classes: > {noformat} > TestYarnClient > TestApplicationCleanup > TestApplicationMasterLauncher > TestDecommissioningNodesWatcher > TestKillApplicationWithRMHA > TestNodeBlacklistingOnAMFailures > TestRM > TestRMAdminService > TestRMRestart > TestResourceTrackerService > TestWorkPreservingRMRestart > TestAMRMRPCNodeUpdates > TestAMRMRPCResponseId > TestAMRestart > TestApplicationLifetimeMonitor > TestNodesListManager > TestRMContainerImpl > TestAbstractYarnScheduler > TestSchedulerUtils > TestFairOrderingPolicy > TestAMRMTokens > TestDelegationTokenRenewer > {noformat} > Most of the test methods in these classes are failing, though some do succeed. > There are two main categories of issues: > # The test submits an application to the {{MockRM}} and waits for it to enter > a specific state, which it never does, and the test times out. We need to > call {{update()}} on the scheduler. > # The test throws a {{ClassCastException}} when casting {{FSQueueMetrics}} to > {{CSQueueMetrics}}. This is because {{QueueMetrics}} metrics are static, and > a previous test using the FairScheduler initialized them while the current test is > using the CapacityScheduler. We need to reset the metrics. 
[jira] [Commented] (YARN-7393) RegistryDNS doesn't work in tcp channel
[ https://issues.apache.org/jira/browse/YARN-7393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221158#comment-16221158 ] Hadoop QA commented on YARN-7393: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 47s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} yarn-native-services Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 30s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} yarn-native-services passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | YARN-7393 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894192/YARN-7393.yarn-native-services.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c0a3ef7c91c6 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / f53aa3e | | maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18161/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18161/console |
[jira] [Reopened] (YARN-7146) Many RM unit tests failing with FairScheduler
[ https://issues.apache.org/jira/browse/YARN-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter reopened YARN-7146: - Reopening for the branch-2 version of the patch > Many RM unit tests failing with FairScheduler > - > > Key: YARN-7146 > URL: https://issues.apache.org/jira/browse/YARN-7146 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-beta1 >Reporter: Robert Kanter >Assignee: Robert Kanter > Fix For: 3.0.0-beta1, 3.1.0 > > Attachments: YARN-7146.001.patch, YARN-7146.002.patch, > YARN-7146.003.patch, YARN-7146.004.patch > > > Many of the RM unit tests are failing when using the FairScheduler. > Here is a list of affected test classes: > {noformat} > TestYarnClient > TestApplicationCleanup > TestApplicationMasterLauncher > TestDecommissioningNodesWatcher > TestKillApplicationWithRMHA > TestNodeBlacklistingOnAMFailures > TestRM > TestRMAdminService > TestRMRestart > TestResourceTrackerService > TestWorkPreservingRMRestart > TestAMRMRPCNodeUpdates > TestAMRMRPCResponseId > TestAMRestart > TestApplicationLifetimeMonitor > TestNodesListManager > TestRMContainerImpl > TestAbstractYarnScheduler > TestSchedulerUtils > TestFairOrderingPolicy > TestAMRMTokens > TestDelegationTokenRenewer > {noformat} > Most of the test methods in these classes are failing, though some do succeed. > There are two main categories of issues: > # The test submits an application to the {{MockRM}} and waits for it to enter > a specific state, which it never does, and the test times out. We need to > call {{update()}} on the scheduler. > # The test throws a {{ClassCastException}} when casting {{FSQueueMetrics}} to > {{CSQueueMetrics}}. This is because {{QueueMetrics}} metrics are static, and > a previous test using the FairScheduler initialized them while the current test is > using the CapacityScheduler. We need to reset the metrics. 
[jira] [Commented] (YARN-5516) Add REST API for supporting recurring reservations
[ https://issues.apache.org/jira/browse/YARN-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221145#comment-16221145 ] Hudson commented on YARN-5516: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13140 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13140/]) YARN-5516. Add REST API for supporting recurring reservations. (Sean Po (subu: rev 25932da6d1ee56299c8f9911576a42792c435407) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestReservationInputValidator.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/PlanContext.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/submit-reservation.json * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ReservationDefinitionInfo.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestInMemoryPlan.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInputValidator.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/InMemoryPlan.java * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesReservation.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ReservationDefinition.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ReservationDefinitionPBImpl.java > Add REST API for supporting recurring reservations > -- > > Key: YARN-5516 > URL: https://issues.apache.org/jira/browse/YARN-5516 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Sangeetha Abdu Jyothi >Assignee: Sean Po > Fix For: 3.0.0, 3.1.0 > > Attachments: YARN-5516.v001.patch, YARN-5516.v002.patch, > YARN-5516.v003.patch, YARN-5516.v004.patch, YARN-5516.v005.patch, > YARN-5516.v006.patch > > > YARN-5516 changing REST API of the reservation system to support periodicity.
[jira] [Commented] (YARN-7217) Improve API service usability for updating service spec and state
[ https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221142#comment-16221142 ] Eric Yang commented on YARN-7217: - I opened YARN-7399 as a storage improvement for YARN service metadata. > Improve API service usability for updating service spec and state > - > > Key: YARN-7217 > URL: https://issues.apache.org/jira/browse/YARN-7217 > Project: Hadoop YARN > Issue Type: Task > Components: api, applications >Reporter: Eric Yang >Assignee: Eric Yang > Attachments: YARN-7217.yarn-native-services.001.patch, > YARN-7217.yarn-native-services.002.patch, > YARN-7217.yarn-native-services.003.patch, > YARN-7217.yarn-native-services.004.patch, > YARN-7217.yarn-native-services.005.patch > > > The API service for deploying and managing YARN services has several limitations. > The {{updateService}} API provides multiple functions: > # Stopping a service. > # Starting a service. > # Increasing or decreasing the number of containers. (This was removed in YARN-7323.) > This overloading is buggy, depending on how the configuration should be applied. > h4. Scenario 1 > A user retrieves a Service object from the getService call, and the Service object > contains state: STARTED. The user would like to increase the number of > containers for the deployed service. The JSON is updated to increase the > container count, but the PUT method does not actually increase the container count. > h4. Scenario 2 > A user retrieves a Service object from the getService call, and the Service object > contains state: STOPPED. The user would like to make an environment > configuration change. The configuration does not get updated after the PUT > method. > This could be addressed by rearranging the logic of START/STOP after the > configuration update. However, there are other potential combinations that > can break the PUT method. For example, a user may want to make configuration changes > but not restart the service until a later time. > h4. 
Scenario 3 > There is no API to list all applications deployed by the same user. > h4. Scenario 4 > The desired state (spec) and the current state are represented by the same Service > object. There is no easy way to tell whether "state" is the desired state to reach > or the current state of the service. It would be nice to be able to > retrieve both the desired state and the current state through separate entry points. > Implementing /spec and /state resolves this problem. > h4. Scenario 5 > Listing all services deployed by the same user can trigger a directory-listing > operation on the namenode if HDFS is used as metadata storage. When hundreds > of users use the Service UI to view or deploy applications, this amounts to a > denial-of-service attack on the namenode. The many small metadata files also > reduce the efficiency of namenode memory usage. Hence, a cache layer for storing > service metadata can reduce namenode stress. > h3. Proposed change > ApiService can separate the PUT method into two PUT methods: one for configuration > changes and one for operation changes. 
The new API could look like: > {code} > @PUT > /ws/v1/services/[service_name]/spec > Request Data: > { > "name": "amp", > "components": [ > { > "name": "mysql", > "number_of_containers": 2, > "artifact": { > "id": "centos/mysql-57-centos7:latest", > "type": "DOCKER" > }, > "run_privileged_container": false, > "launch_command": "", > "resource": { > "cpus": 1, > "memory": "2048" > }, > "configuration": { > "env": { > "MYSQL_USER":"${USER}", > "MYSQL_PASSWORD":"password" > } > } > } > ], > "quicklinks": { > "Apache Document Root": > "http://httpd.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/", > "PHP MyAdmin": "http://phpmyadmin.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/" > } > } > {code} > {code} > @PUT > /ws/v1/services/[service_name]/state > Request data: > { > "name": "amp", > "components": [ > { > "name": "mysql", > "state": "STOPPED" > } > ] > } > {code} > SOLR can be used to cache the Yarnfile to improve lookup performance and reduce > the stress of the namenode small-file problem and high-frequency lookups. SOLR is > chosen for caching metadata because its indexing feature can also be used to build > full-text search for the application catalog. > For a service that requires configuration changes to increase or decrease node > count, the calling sequence is: > {code} > # GET /ws/v1/services/{service_name}/spec > # Change number_of_containers to the desired number. > # PUT /ws/v1/services/{service_name}/spec to update the spec. > # PUT /ws/v1/services/{service_name}/state to stop the existing service. > # PUT /ws/v1/services/{service_name}/state to start the service. > {code} >
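The five-step calling sequence above can be sketched as a runnable client exercised against a local stub server. Only the endpoint paths and verbs follow the proposal; the stub server, the service name ("amp"), and the request bodies are illustrative assumptions, not the real YARN API service:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SpecStateSequence {
    public static void main(String[] args) throws Exception {
        // Stub server standing in for the API service; it only records the
        // method and path of each request -- the real service is not modeled.
        List<String> calls = new CopyOnWriteArrayList<>();
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ws/v1/services/amp", ex -> {
            calls.add(ex.getRequestMethod() + " " + ex.getRequestURI().getPath());
            byte[] body = "{\"name\":\"amp\"}".getBytes();
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();
        String base = "http://localhost:" + server.getAddress().getPort()
                + "/ws/v1/services/amp";
        // 1. GET the current spec.
        call("GET", base + "/spec", null);
        // 2-3. Change number_of_containers locally, then PUT the updated spec.
        call("PUT", base + "/spec",
                "{\"components\":[{\"name\":\"mysql\",\"number_of_containers\":3}]}");
        // 4. PUT the state endpoint to stop the existing service.
        call("PUT", base + "/state", "{\"state\":\"STOPPED\"}");
        // 5. PUT the state endpoint again to start it with the new spec.
        call("PUT", base + "/state", "{\"state\":\"STARTED\"}");
        server.stop(0);
        System.out.println(calls.equals(Arrays.asList(
                "GET /ws/v1/services/amp/spec",
                "PUT /ws/v1/services/amp/spec",
                "PUT /ws/v1/services/amp/state",
                "PUT /ws/v1/services/amp/state")));
    }

    private static void call(String method, String url, String json) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.setRequestMethod(method);
        if (json != null) {
            c.setDoOutput(true);
            c.setRequestProperty("Content-Type", "application/json");
            try (OutputStream os = c.getOutputStream()) { os.write(json.getBytes()); }
        }
        c.getResponseCode(); // block until the stub has handled the request
        c.disconnect();
    }
}
```

The key design point the sequence illustrates is that /spec updates are decoupled from /state transitions, so the spec can be changed ahead of time and applied later by a stop/start pair.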
[jira] [Updated] (YARN-7399) Yarn services metadata storage improvement
[ https://issues.apache.org/jira/browse/YARN-7399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated YARN-7399: Description: In Slider, metadata is stored in the user's home directory. The Slider command line interface interacts with HDFS directly to list deployed applications and invokes the YARN API or HDFS API to provide information to the user. This design works for a single user managing his/her own applications. Now that this design has been ported to YARN services, it has become apparent that it makes it difficult for an administrator to list all applications deployed on the Hadoop cluster in order to manage them. The Resource Manager needs to crawl through every user's home directory to compile metadata about deployed applications. This can put high load on the namenode: hundreds or thousands of directory-listing calls across directories owned by different users. Hence, it might be best to centralize the metadata storage in Solr or HBase to reduce the number of IO calls to the namenode for managing applications. In Slider, one application is composed of metainfo, specifications in JSON, and a zip-file payload that contains application code and deployment code. Both the meta information and the zip-file payload are stored in the same application directory in HDFS. This works well for distributed applications without a central application manager that oversees all applications. In the next generation of application management, we would like to centralize the metainfo and JSON specifications in a centralized storage managed by the YARN user, and keep the payload zip file in the user's home directory or in a docker registry. This arrangement provides a faster metainfo lookup when we list all deployed applications and services on the YARN dashboard. When we centralize metainfo under the YARN user, we also need to build ACLs to enforce who can manage and update applications. 
The current proposal is: yarn.admin.acl - a list of groups that can submit/reconfigure/pause/kill all applications; normal users - can submit/reconfigure/pause/kill their own applications was:In Slider, metadata is stored in user's home directory. Slider command line interface interacts with HDFS directly to list deployed applications and invoke YARN API or HDFS API to provide information to user. This design works for a single user manage his/her own applications. When this design has been ported to Yarn services, it becomes apparent that this design is difficult to list all deployed applications on Hadoop cluster for administrator to manage applications. Resource Manager needs to crawl through every user's home directory to compile metadata about deployed applications. This can trigger high load on namenode to list hundreds or thousands of list directory calls owned by different users. Hence, it might be best to centralize the metadata storage to Solr or HBase to reduce number of IO calls to namenode for manage applications. > Yarn services metadata storage improvement > -- > > Key: YARN-7399 > URL: https://issues.apache.org/jira/browse/YARN-7399 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn-native-services >Reporter: Eric Yang > > In Slider, metadata is stored in the user's home directory. The Slider command line > interface interacts with HDFS directly to list deployed applications and > invokes the YARN API or HDFS API to provide information to the user. This design works > for a single user managing his/her own applications. Now that this design has been > ported to YARN services, it has become apparent that it makes it difficult for an > administrator to list all applications deployed on the Hadoop cluster in order to manage > them. The Resource Manager needs to crawl through every user's home > directory to compile metadata about deployed applications. This can put high > load on the namenode: hundreds or thousands of directory-listing calls > across directories owned by different users. Hence, it might be best to centralize the metadata > storage in Solr or HBase to reduce the number of IO calls to the namenode for managing > applications. > In Slider, one application is composed of metainfo, specifications in JSON, > and a zip-file payload that contains application code and deployment code. > Both the meta information and the zip-file payload are stored in the same > application directory in HDFS. This works well for distributed applications > without a central application manager that oversees all applications. > In the next generation of application management, we would like to centralize > the metainfo and JSON specifications in a centralized storage managed by the YARN > user, and keep the payload zip file in the user's home directory or in a docker > registry. This arrangement provides a faster metainfo lookup when we > list all deployed applications and services on the YARN dashboard. > When we centralize metainfo to YARN
[jira] [Created] (YARN-7399) Yarn services metadata storage improvement
Eric Yang created YARN-7399: --- Summary: Yarn services metadata storage improvement Key: YARN-7399 URL: https://issues.apache.org/jira/browse/YARN-7399 Project: Hadoop YARN Issue Type: Improvement Components: yarn-native-services Reporter: Eric Yang In Slider, metadata is stored in the user's home directory. The Slider command line interface interacts with HDFS directly to list deployed applications and invokes the YARN API or HDFS API to provide information to the user. This design works for a single user managing his/her own applications. Now that this design has been ported to YARN services, it has become apparent that it makes it difficult for an administrator to list all applications deployed on the Hadoop cluster in order to manage them. The Resource Manager needs to crawl through every user's home directory to compile metadata about deployed applications. This can put high load on the namenode: hundreds or thousands of directory-listing calls across directories owned by different users. Hence, it might be best to centralize the metadata storage in Solr or HBase to reduce the number of IO calls to the namenode for managing applications.
[jira] [Commented] (YARN-7336) Unsafe cast from long to int Resource.hashCode() method
[ https://issues.apache.org/jira/browse/YARN-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221132#comment-16221132 ] Hadoop QA commented on YARN-7336: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 48s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7336 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894190/YARN-7336.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3f598b371fb9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 25932da | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18162/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18162/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Unsafe cast from long to int Resource.hashCode() method >
[jira] [Commented] (YARN-7217) Improve API service usability for updating service spec and state
[ https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221127#comment-16221127 ] Eric Yang commented on YARN-7217: - [~jianhe], thank you for reviewing the patch. Here are the answers: {quote} - should solr and fs be a pluggable implementation of a common interface ? Basically, should it be either fs or solr back-end. Right now both are there. {quote} This JIRA is a transition phase. Solr is used as an alternate storage mechanism to bridge a gap: the current HDFS storage mechanism cannot list applications across all users. Let's leave the storage change to another JIRA. {quote} - getServicesList: it assumes solr is enabled, if not, it will throw NPE. I think we should conditionally check if solr is enabled, if not, throw exception saying only solr backend is supported for this endpoint. {quote} getServicesList never throws an NPE. Ysc is initialized in the constructor. If Solr is disabled, it returns the SERVICE_UNAVAILABLE HTTP code. This is verified by testGetServicesList in the TestApiServer test case. {quote} - similarly for getServiceSpec endpoint, it will throw NPE because ysc is null, if solr is not enabled. {quote} Same as above: ysc is never null because it is initialized in the constructor. If ysc were left uninitialized when Solr is disabled, as suggested, then an NPE could occur. I agree that the coding style for checking whether Solr is enabled can be more consistent, and I will revise the code accordingly. {quote} - similarly TestYarnNativeServices#testChangeSpec, as discussed, we won't need to restart the entire service to update the spec ? what's the use case for this ? {quote} Per the discussion this morning, it is best to keep the configuration change and the restart operation as two separate calls. This allows the configuration to be updated while deployment is held off until a suitable time window becomes available, at which point the service can be restarted.
This gives the system administrator finer-grained control: persist the desired configuration change, then choose to restart the service or add more nodes without a restart. {quote} - Should it be: if solr is enabled, create the solrClient ? If solr is not enabled, there's no point creating the solrClient. {quote} The Solr-enabled flag is kept in the YarnSolrClient object so that its internal state stays self-contained, instead of tracking the flag in ServiceClient. I could add an if statement to skip initialization of the YarnSolrClient, but then every call site would have to guard against an NPE, which seems redundant. Hence, I will not make a change here. {quote} - updateComponent api should also update the spec in solr ? - the username parameter is not used in findAppEntry API at all, but the deployApp inserts the username, then why is the username required in the first place ? - similarly, username is not used in deleteApp, then why do we need to get the username in caller in the first place {quote} I will fix these bugs. {quote} All services configs are currently in YarnServiceConf class, I think we can put the new configs there to not mix with the core YarnConfigurations, until the feature and config namings are stable, we can merge them back to YarnConfiguration. {quote} We should avoid introducing sub-configurations without exposing them at the upper level. The chance that someone else introduces a duplicate hierarchy is high, and merging then becomes painful. I recommend upstreaming the configuration knobs to the upper level to avoid doing the same thing over and over. This is a difference in philosophy about how to handle changes; since we are already on a branch, there is no risk in introducing them to yarn-common directly. I will not make a change here. {quote} could you explain the logic below? It looks like it searches for all entries matching "id:appName", and the while loop continues until the last one is found, returning the last one.
Presumably there's only 1 entry, so why is a while loop required? And if there are multiple entries, why return the last one? {quote} There will only be one match because this is a single-entry query. However, Solr doesn't have a single-entry lookup interface, so I just use the common Iterator interface provided by Solr; that is why it is in a while loop. I can change it to an if..else to make it more readable. Thanks for the suggestions; I will make the improvements and upload another patch. Let me know if there are any doubts about my comments. Thanks > Improve API service usability for updating service spec and state > - > > Key: YARN-7217 > URL: https://issues.apache.org/jira/browse/YARN-7217 > Project: Hadoop YARN > Issue Type: Task > Components: api, applications >Reporter: Eric Yang >Assignee: Eric Yang >
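The iterator point discussed above can be illustrated with a minimal, self-contained sketch. Note this is not the actual patch code: the Solr document types are stood in by plain java.util collections, and the method names are hypothetical; it only shows why a while loop over a one-match query collapses to a single hasNext() check.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class SingleEntryLookup {

    // Original style: walk the whole result set with the generic
    // Iterator interface, returning whatever was seen last. With a
    // single-entry query the loop body runs exactly once.
    static Map<String, Object> findWithWhile(List<Map<String, Object>> docs) {
        Map<String, Object> found = null;
        Iterator<Map<String, Object>> it = docs.iterator();
        while (it.hasNext()) {
            found = it.next();
        }
        return found;
    }

    // More readable equivalent for a query that matches at most one
    // document, per the review suggestion: one hasNext() check, no loop.
    static Map<String, Object> findSingle(List<Map<String, Object>> docs) {
        Iterator<Map<String, Object>> it = docs.iterator();
        return it.hasNext() ? it.next() : null;
    }
}
```

Both methods behave identically on zero- and one-element result sets; the second simply makes the single-entry assumption explicit.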
[jira] [Commented] (YARN-7307) Revisit resource-types.xml loading behaviors
[ https://issues.apache.org/jira/browse/YARN-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16221115#comment-16221115 ] Hadoop QA commented on YARN-7307: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 50s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 6s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 259 unchanged - 1 fixed = 268 total (was 260) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 41s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 46m 27s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 17s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5b98639 |