[jira] [Commented] (YARN-5728) TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization timeout
[ https://issues.apache.org/jira/browse/YARN-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106014#comment-16106014 ] Ray Chiang commented on YARN-5728: -- Sorry for the delay. The change looks fine to me. +1 > TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization timeout > > > Key: YARN-5728 > URL: https://issues.apache.org/jira/browse/YARN-5728 > Project: Hadoop YARN > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: YARN-5728.001.patch, YARN-5728.002.patch, > YARN-5728.01.patch > > > TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization is failing by > timeout. > https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/192/testReport/junit/org.apache.hadoop.yarn.server/TestMiniYarnClusterNodeUtilization/testUpdateNodeUtilization/ > {noformat} > java.lang.Exception: test timed out after 6 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) > at com.sun.proxy.$Proxy85.nodeHeartbeat(Unknown Source) > at > org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization(TestMiniYarnClusterNodeUtilization.java:113) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6902) Update Microsoft JDBC Driver for SQL Server version in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106010#comment-16106010 ] Botong Huang commented on YARN-6902: Thanks [~subru]! > Update Microsoft JDBC Driver for SQL Server version in License.txt > -- > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Fix For: YARN-2915 > > Attachments: YARN-6902-YARN-2915.v1.patch > > > Update Microsoft JDBC Driver for SQL Server version in License.txt
[jira] [Commented] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8
[ https://issues.apache.org/jira/browse/YARN-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105970#comment-16105970 ] Hadoop QA commented on YARN-6901: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2.8 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 48s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} 
javadoc {color} | {color:green} 0m 23s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 88 unchanged - 1 fixed = 89 total (was 89) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 24s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 12s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}171m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent() is unsynchronized, org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.setParent(CSQueue) is synchronized At AbstractCSQueue.java:synchronized At
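The FindBugs -1 above (AbstractCSQueue.getParent() unsynchronized while setParent(CSQueue) is synchronized) is the classic inconsistent-synchronization pattern. A minimal sketch of the problem and one common fix, using hypothetical class names rather than the actual AbstractCSQueue code:

```java
// Illustrative only: hypothetical classes, not the actual AbstractCSQueue code.
// FindBugs reports "inconsistent synchronization" when a field is written
// under a lock but read without one:
class UnsafeQueue {
    private Object parent;

    Object getParent() { return parent; }                   // unsynchronized read

    synchronized void setParent(Object p) { parent = p; }   // synchronized write
}

// One common fix: make the field volatile (or synchronize the getter too),
// so a reader is guaranteed to observe the latest published value.
class SafeQueue {
    private volatile Object parent;

    Object getParent() { return parent; }

    void setParent(Object p) { parent = p; }
}
```

Either form removes the warning; volatile is the cheaper choice when the getter does nothing but return the field.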
[jira] [Commented] (YARN-6902) Update Microsoft JDBC Driver for SQL Server version in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105908#comment-16105908 ] Hadoop QA commented on YARN-6902: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} YARN-2915 Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6902 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879460/YARN-6902-YARN-2915.v1.patch | | Optional Tests | asflicense | | uname | Linux e89bb7d4d925 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 47e9b3f | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16606/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Update Microsoft JDBC Driver for SQL Server version in License.txt > -- > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > > > Update Microsoft JDBC Driver for SQL Server version in License.txt
[jira] [Commented] (YARN-6902) Update Microsoft JDBC Driver for SQL Server version in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105901#comment-16105901 ] Hadoop QA commented on YARN-6902: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} YARN-2915 Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6902 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879460/YARN-6902-YARN-2915.v1.patch | | Optional Tests | asflicense | | uname | Linux b6d7f5009661 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 47e9b3f | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16605/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Update Microsoft JDBC Driver for SQL Server version in License.txt > -- > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > > > Update Microsoft JDBC Driver for SQL Server version in License.txt
[jira] [Updated] (YARN-6902) Update Microsoft JDBC Driver for SQL Server version in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6902: - Component/s: federation > Update Microsoft JDBC Driver for SQL Server version in License.txt > -- > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > > > Update Microsoft JDBC Driver for SQL Server version in License.txt
[jira] [Updated] (YARN-6902) Update Microsoft JDBC Driver for SQL Server version in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6902: - Description: Update Microsoft JDBC Driver for SQL Server version in License.txt (was: Microsoft JDBC Driver for SQL Server) > Update Microsoft JDBC Driver for SQL Server version in License.txt > -- > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > > > Update Microsoft JDBC Driver for SQL Server version in License.txt
[jira] [Updated] (YARN-6902) Update Microsoft JDBC Driver for SQL Server version in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6902: - Description: Microsoft JDBC Driver for SQL Server > Update Microsoft JDBC Driver for SQL Server version in License.txt > -- > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > > > Microsoft JDBC Driver for SQL Server
[jira] [Updated] (YARN-6902) Update Microsoft JDBC Driver for SQL Server version in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6902: - Summary: Update Microsoft JDBC Driver for SQL Server version in License.txt (was: Update SQL server note in License.txt) > Update Microsoft JDBC Driver for SQL Server version in License.txt > -- > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > >
[jira] [Updated] (YARN-6902) Update SQL server note in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6902: --- Attachment: (was: YARN-6902-YARN-2915.patch) > Update SQL server note in License.txt > - > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > >
[jira] [Updated] (YARN-6902) Update SQL server note in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6902: --- Attachment: YARN-6902-YARN-2915.v1.patch > Update SQL server note in License.txt > - > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > >
[jira] [Updated] (YARN-6902) Update SQL server note in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6902: --- Attachment: YARN-6902-YARN-2915.patch > Update SQL server note in License.txt > - > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6902-YARN-2915.v1.patch > >
[jira] [Updated] (YARN-6902) Update SQL server note in License.txt
[ https://issues.apache.org/jira/browse/YARN-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6902: --- Issue Type: Sub-task (was: Task) Parent: YARN-2915 > Update SQL server note in License.txt > - > > Key: YARN-6902 > URL: https://issues.apache.org/jira/browse/YARN-6902 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor >
[jira] [Created] (YARN-6902) Update SQL server note in License.txt
Botong Huang created YARN-6902: -- Summary: Update SQL server note in License.txt Key: YARN-6902 URL: https://issues.apache.org/jira/browse/YARN-6902 Project: Hadoop YARN Issue Type: Task Reporter: Botong Huang Assignee: Botong Huang Priority: Minor
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105888#comment-16105888 ] Hadoop QA commented on YARN-6853: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} YARN-2915 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 58s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} YARN-2915 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6853 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879455/YARN-6853-YARN-2915.v3.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 7571c59a3a0d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 47e9b3f | | modules | C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16604/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch, YARN-6853-YARN-2915.v3.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySQL > scripts to be able to run Federation with a MySQL server, which will be less > performant but convenient.
[jira] [Comment Edited] (YARN-6861) Reader API for sub application entities
[ https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105876#comment-16105876 ] Vrushali C edited comment on YARN-6861 at 7/28/17 11:58 PM: Thanks [~rohithsharma] for the patch! I had some basic questions and a few javadoc related nits: - we need to add the new rest api in the documentation TimelineServiceV2.md file In SubApplicationEntityReader.java - update the class javadoc comment for L64 Change Entitytable => Subapp table - update L88 Entitytable => Subapp table - Am trying to figure out if there is any difference between SubApplicationEntityReader # readKeyValuePairs and GenericEntityReader# readKeyValuePairs? If not, it can be reused? I think the other readers are using the readKeyValuePairs from GenericEntityReader. - also there is a stub method getResult at L 613. Trying to understand why it is needed. In TimelineReaderWebServices - Could we add a comment above the function saying this REST endpoint returns sub application entities? I think it may not be clear to other developers that this isn’t the regular entities but is querying for sub app entities. - If I am understanding correctly, if the doAsUser is set in the context, this fetches the sub application entities, else it will return the Generic entity reader in TimelineEntityReaderFactory? So is the understanding correct that this API will NOT invoke the sub application table explicitly if the UGI does not have the doAsUser set? If so, it’s sort of happening behind the scenes. I am not sure if this is a very clean way. I am wondering how this will work from a browser if I as a user want to fetch sub app entities for another user, say Varun? Perhaps just accept the sub app user id as a Query param just like we accept userId? Also perhaps we can call the end point something else so that it’s clear that we have sub application entities not regular entities? was (Author: vrushalic): Thanks [~rohithsharma] for the patch! 
I had some basic questions and a few javadoc related nits: - we need to add the new rest api in the documentation TimelineServiceV2.md file In SubApplicationEntityReader.java - update the class javadoc comment for L64 Change Entitytable => Subapp table - update L88 Entitytable => Subapp table - Am trying to figure out if there is any difference between SubApplicationEntityReader # readKeyValuePairs and GenericEntityReader# readKeyValuePairs? If not, it can be reused? I think the other readers are using the readKeyValuePairs from GenericEntityReader. - also there is a stub method getResult at L 613. Trying to understand why it is needed. In TimelineReaderWebServices - Could we add a comment above the function saying this REST endpoint returns sub application entities? I think it may not be clear to other developers that this isn’t the regular entities but is querying for sub app entities. - If I am understanding correctly, if the doAsUser is set in the context, this fetches the sub application entities, else it will return the Generic entity reader in TimelineEntityReaderFactory? So is the understanding correct that this API will NOT invoke the sub application table explicitly if the UGI does not have the doAsUser set? If so, it’s sort of happening behind the scenes. I am not sure if this is a very clean way. I am wondering how this will work from a browser if I as a user want to fetch sub app entities for say, user Varun? Perhaps just accept the sub app user id as a Query param just like we accept userId? Also perhaps we can call the end point something else so that it’s clear that we have sub application entities not regular entities? 
> Reader API for sub application entities > --- > > Key: YARN-6861 > URL: https://issues.apache.org/jira/browse/YARN-6861 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: YARN-6861-YARN-5355.001.patch > > > YARN-6733 and YARN-6734 write data into sub application table. There should > be a way to read those entities.
[jira] [Commented] (YARN-6870) Fix floating point inaccuracies in resource availability check in AllocationBasedResourceUtilizationTracker
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105877#comment-16105877 ] Hudson commented on YARN-6870: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12074 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12074/]) YARN-6870. Fix floating point inaccuracies in resource availability (arun suresh: rev 890e14c02a612c772cecd5dff2411060efd418a3) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestAllocationBasedResourceUtilizationTracker.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/AllocationBasedResourceUtilizationTracker.java > Fix floating point inaccuracies in resource availability check in > AllocationBasedResourceUtilizationTracker > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 
7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation.
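The quoted snippet shows that adding and then subtracting the same float fractions leaves a nonzero residue (4.4703484E-8), so an availability check that accumulates float CPU fractions can spuriously report "Not enough cpu". A hedged sketch, with hypothetical names and not the actual patch, of the usual remedy: track allocation in exact integer units (e.g. milli-vcores) so mirrored add/remove operations cancel exactly.

```java
// Hedged sketch with hypothetical names, not the actual YARN-6870 patch:
// track CPU allocation as integer milli-vcores instead of accumulating floats.
class CpuTracker {
    private int allocatedMillis = 0;            // 1000 == one fully used vcore
    private static final int CAPACITY_MILLIS = 1000;

    private static int toMillis(int vcoresUsed, int totalVcores) {
        return vcoresUsed * 1000 / totalVcores; // exact integer bookkeeping
    }

    void allocate(int vcoresUsed, int totalVcores) {
        allocatedMillis += toMillis(vcoresUsed, totalVcores);
    }

    void release(int vcoresUsed, int totalVcores) {
        allocatedMillis -= toMillis(vcoresUsed, totalVcores);
    }

    boolean hasRoomFor(int vcoresUsed, int totalVcores) {
        return allocatedMillis + toMillis(vcoresUsed, totalVcores) <= CAPACITY_MILLIS;
    }

    int getAllocatedMillis() { return allocatedMillis; }
}
```

Replaying the description's loop (six allocations of 2/18 plus one of 4/18, then the mirrored releases) leaves this tracker at exactly 0, whereas the float version leaves the 4.4703484E-8 residue.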
[jira] [Commented] (YARN-6861) Reader API for sub application entities
[ https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105876#comment-16105876 ] Vrushali C commented on YARN-6861: -- Thanks [~rohithsharma] for the patch! I had some basic questions and a few javadoc related nits: - we need to add the new rest api in the documentation TimelineServiceV2.md file In SubApplicationEntityReader.java - update the class javadoc comment for L64 Change Entitytable => Subapp table - update L88 Entitytable => Subapp table - Am trying to figure out if there is any difference between SubApplicationEntityReader # readKeyValuePairs and GenericEntityReader# readKeyValuePairs? If not, it can be reused? I think the other readers are using the readKeyValuePairs from GenericEntityReader. - also there is a stub method getResult at L 613. Trying to understand why it is needed. In TimelineReaderWebServices - Could we add a comment above the function saying this REST endpoint returns sub application entities? I think it may not be clear to other developers that this isn’t the regular entities but is querying for sub app entities. - If I am understanding correctly, if the doAsUser is set in the context, this fetches the sub application entities, else it will return the Generic entity reader in TimelineEntityReaderFactory? So is the understanding correct that this API will NOT invoke the sub application table explicitly if the UGI does not have the doAsUser set? If so, it’s sort of happening behind the scenes. I am not sure if this is a very clean way. I am wondering how this will work from a browser if I as a user want to fetch sub app entities for say, user Varun? Perhaps just accept the sub app user id as a Query param just like we accept userId? Also perhaps we can call the end point something else so that it’s clear that we have sub application entities not regular entities? 
> Reader API for sub application entities > --- > > Key: YARN-6861 > URL: https://issues.apache.org/jira/browse/YARN-6861 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: YARN-6861-YARN-5355.001.patch > > > YARN-6733 and YARN-6734 writes data into sub application table. There should > be a way to read those entities. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6875) New aggregated log file format for YARN log aggregation.
[ https://issues.apache.org/jira/browse/YARN-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105871#comment-16105871 ] Wangda Tan commented on YARN-6875: -- Thanks for the comments, [~jlowe]/[~xgong]. I think I misled Jason before: we didn't plan to add the separate index design at the beginning, but we figured out it is required for recovery. I agree with Jason's points:
- Log files are rarely read after they are written.
- Creating a separate index file during write roughly doubles the NameNode workload.
However, if we don't write the (temp) index file, the approach listed in Jason's comment will make reads very slow, since the reader needs to repeatedly probe for the last successful write. And the worst part is that we only need to read logs when an app fails or is slow, so it is likely we will read such app logs several times; I don't think paying that cost on every read is a good user experience. I agree with Xuan's comment: if partial log aggregation is not enabled, this design doesn't add any workload. [~jlowe], what percentage of the apps running in your cluster enable partial log aggregation? For the partial log aggregation case, an alternative solution is to write log+index to a separate file every time, which makes write performance exactly the same as TFile while read performance can be much better. Jason, could you share your thoughts here? > New aggregated log file format for YARN log aggregation. > > > Key: YARN-6875 > URL: https://issues.apache.org/jira/browse/YARN-6875 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-6875-NewLogAggregationFormat-design-doc.pdf > > > T-file is the underlying log format for the aggregated logs in YARN. We have > seen several performance issues, especially for very large log files. > We will introduce a new log format which has better performance for large > log files. 
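The trade-off being debated above (extra NameNode load from a second file vs. readers having to rescan for the last successful write) can be illustrated with a toy trailing-index layout. This is only a hedged sketch of the general technique; the class name, method names, and byte layout are hypothetical and are not taken from the attached design doc. The writer appends payloads, then an index of offsets, then a fixed-size footer pointing at the index, so a reader jumps straight to the index instead of probing the payload:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy layout (hypothetical): [payload 0]..[payload n-1][count][offsets...][indexStart]
public class TrailingIndexLog {

  // Writer: payloads first, then the index, then an 8-byte footer holding
  // the index start, so readers never have to scan the payload region.
  static byte[] write(byte[][] payloads) {
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      DataOutputStream out = new DataOutputStream(bytes);
      long[] offsets = new long[payloads.length];
      for (int i = 0; i < payloads.length; i++) {
        offsets[i] = out.size();        // where each container log starts
        out.write(payloads[i]);
      }
      long indexStart = out.size();
      out.writeInt(payloads.length);    // index = entry count + offsets
      for (long off : offsets) {
        out.writeLong(off);
      }
      out.writeLong(indexStart);        // fixed-size footer
      return bytes.toByteArray();
    } catch (IOException e) {           // cannot happen for in-memory streams
      throw new UncheckedIOException(e);
    }
  }

  // Reader: one read at the footer, one at the index -- cost is independent
  // of how many bytes of logs were aggregated.
  static int entryCount(byte[] file) {
    ByteBuffer buf = ByteBuffer.wrap(file);
    long indexStart = buf.getLong(file.length - Long.BYTES);
    return buf.getInt((int) indexStart);
  }

  public static void main(String[] args) {
    byte[][] logs = {
        "stderr: oom".getBytes(StandardCharsets.UTF_8),
        "stdout: done".getBytes(StandardCharsets.UTF_8)
    };
    System.out.println(entryCount(write(logs))); // prints 2
  }
}
```

Under this sketch, the per-cycle "log+index in a separate file" alternative Wangda mentions would simply roll one such unit per aggregation cycle: writes stay append-only like TFile, while each cycle's reader still gets constant-cost access to its index.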
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105867#comment-16105867 ] Giovanni Matteo Fumarola commented on YARN-6853: Thanks [~subru] for the review. Actually, MySQL's license is GPL 2.0, so I cannot add it to the dependency list. In v3 I updated the documentation to say where to download it and which version to use. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch, YARN-6853-YARN-2915.v3.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL server, which will be less > performant but more convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-6853: --- Attachment: YARN-6853-YARN-2915.v3.patch > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch, YARN-6853-YARN-2915.v3.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL server, which will be less > performant but more convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6870) Fix floating point inaccuracies in resource availability check in AllocationBasedResourceUtilizationTracker
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6870: -- Fix Version/s: 3.0.0-beta1 2.9.0 > Fix floating point inaccuracies in resource availability check in > AllocationBasedResourceUtilizationTracker > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
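The residue printed by the Bar.java snippet above comes from accumulating a rounded float quotient fourteen times. A common remedy, sketched here under stated assumptions (the class name is hypothetical, and this shows the general fixed-point technique rather than what the attached patches necessarily do), is to track CPU in integer milli-vcores, so that releasing exactly what was allocated always returns the tracker to zero:

```java
// Fixed-point CPU tracking sketch (hypothetical class name): the same
// allocation pattern as Bar.java, but in integer milli-vcores, so the
// symmetric add/remove sequence cancels exactly instead of leaving
// a 4.4703484E-8 residue.
public class MilliVcoreTracker {
  private static final int SCALE = 1000; // thousandths of a vcore

  // 4/18 or 2/18 of a vcore, rounded once to an integer number of
  // milli-vcores; the SAME rounded value is used for add and remove.
  static int share(int numerator) {
    return numerator * SCALE / 18;
  }

  static int residueAfterSymmetricUse() {
    int used = 0;
    for (int i = 0; i < 7; i++) {
      used += share(i == 6 ? 4 : 2);   // allocate, as in Bar.java
    }
    for (int i = 0; i < 7; i++) {
      used -= share(i == 6 ? 4 : 2);   // release the identical amounts
    }
    return used;
  }

  public static void main(String[] args) {
    System.out.println(residueAfterSymmetricUse()); // prints 0
  }
}
```

The key property is that each container's share is rounded exactly once, so "Current CPU Allocation" can never drift below zero or above capacity from rounding alone.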
[jira] [Commented] (YARN-6875) New aggregated log file format for YARN log aggregation.
[ https://issues.apache.org/jira/browse/YARN-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105859#comment-16105859 ] Xuan Gong commented on YARN-6875: - Thanks for the comments, [~jlowe]. I fully understand your concern. bq. I'm not a big fan of having a separate file, even temporarily, because log aggregation can already be a large portion of the namenode's write load on large clusters. Having that separate file will increase the namenode write load significantly (approximately 2x per log aggregation cycle if I understand it correctly). I agree with this, but the proposed solution will not be worse than the current solution (TFile). Also, the index file will be created only when partial log aggregation is enabled. If we enable partial log aggregation:
* For the T-File solution (currently used), we create a new file every time we do log aggregation. If we have done log aggregation three times, we have three T-Files.
* For the proposed solution, we have at most two files: the log file and the index file.
bq. Note that the separate index file doesn't solve all the race conditions for the reader. Yes, this corner case is valid, but I think it is OK: the reader would fail in this case, and we can always retry the read later. > New aggregated log file format for YARN log aggregation. > > > Key: YARN-6875 > URL: https://issues.apache.org/jira/browse/YARN-6875 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-6875-NewLogAggregationFormat-design-doc.pdf > > > T-file is the underlying log format for the aggregated logs in YARN. We have > seen several performance issues, especially for very large log files. > We will introduce a new log format which has better performance for large > log files. 
[jira] [Updated] (YARN-6870) Fix floating point inaccuracies in resource availability check in AllocationBasedResourceUtilizationTracker
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6870: -- Summary: Fix floating point inaccuracies in resource availability check in AllocationBasedResourceUtilizationTracker (was: Fix resource availability check in ) > Fix floating point inaccuracies in resource availability check in > AllocationBasedResourceUtilizationTracker > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated YARN-6900: -- Attachment: YARN-6900-YARN-2915-000.patch Proposal using curator. > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation, nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > Attachments: YARN-6900-YARN-2915-000.patch > > > YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only > support SQL based stores, this JIRA tracks adding a ZooKeeper based > implementation for simplifying deployment as it's already popularly used for > {{RMStateStore}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6870) Fix resource availability check in
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6870: -- Summary: Fix resource availability check in (was: Fix AllocationBasedResourceUtilizationTracker::ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise) > Fix resource availability check in > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
[jira] [Updated] (YARN-6870) Fix AllocationBasedResourceUtilizationTracker::ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6870: -- Summary: Fix AllocationBasedResourceUtilizationTracker::ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise (was: Fix ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise) > Fix > AllocationBasedResourceUtilizationTracker::ResourceUtilization/ContainersMonitorImpl > is calculating CPU utilization as a float, which is imprecise > -- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. 
When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6870) Fix ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6870: -- Summary: Fix ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise (was: ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise) > Fix ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization > as a float, which is imprecise > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. 
AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105850#comment-16105850 ] Arun Suresh commented on YARN-6870: --- Thanks [~brookz]. will commit this shortly > ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a > float, which is imprecise > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
[jira] [Commented] (YARN-6897) Refactoring RMWebServices by moving some util methods to RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105836#comment-16105836 ] Hudson commented on YARN-6897: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12073 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12073/]) YARN-6897. Refactoring RMWebServices by moving some util methods to (subru: rev bcde66bed1e41b5644811fe90bfbf3d56827db36) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebAppUtil.java > Refactoring RMWebServices by moving some util methods to RMWebAppUtil > - > > Key: YARN-6897 > URL: https://issues.apache.org/jira/browse/YARN-6897 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6897.v1.patch, YARN-6897.v2.patch > > > In YARN-6896 the router needs to use some methods already implemented in > {{RMWebServices}}. This jira continues the work done in YARN-6634. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path
[ https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105813#comment-16105813 ] Hadoop QA commented on YARN-6757: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 8 unchanged - 15 fixed = 8 total (was 23) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 25s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6757 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879435/YARN-6757.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux f29b42ace7fe
[jira] [Updated] (YARN-6897) Refactoring RMWebServices by moving some util methods to RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6897: - Summary: Refactoring RMWebServices by moving some util methods to RMWebAppUtil (was: Refactoring RMWebServices by moving some util methods in RMWebAppUtil) > Refactoring RMWebServices by moving some util methods to RMWebAppUtil > - > > Key: YARN-6897 > URL: https://issues.apache.org/jira/browse/YARN-6897 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6897.v1.patch, YARN-6897.v2.patch > > > In YARN-6896 the router needs to use some methods already implemented in > {{RMWebServices}}. This jira continues the work done in YARN-6634. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8
[ https://issues.apache.org/jira/browse/YARN-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-6901: - Attachment: YARN-6901.branch-2.8.001.patch Attached ver.001 patch for branch-2.8. [~sunilg], could you help take a look? [~jlowe]/[~eepayne], I'm not sure whether you have seen this in your 2.8 environments; it looks like an easy-to-hit issue. > A CapacityScheduler app->LeafQueue deadlock found in branch-2.8 > > > Key: YARN-6901 > URL: https://issues.apache.org/jira/browse/YARN-6901 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > Attachments: YARN-6901.branch-2.8.001.patch > > > Stacktrace: > {code} > Thread 22068: (state = BLOCKED) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent() > @bci=0, line=185 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getQueuePath() > @bci=8, line=262 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(org.apache.hadoop.yarn.api.records.Resource, > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocation, > org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) > @bci=183, line=80 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource, > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode, > org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, > org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) > @bci=204, line=747 (Compiled frame) > - > 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource, > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode, > org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, > org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) > @bci=16, line=49 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(org.apache.hadoop.yarn.api.records.Resource, > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, > org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode, > org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) > @bci=61, line=468 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(org.apache.hadoop.yarn.api.records.Resource, > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, > org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode) > @bci=148, line=876 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode) > @bci=157, line=1149 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent) > @bci=266, line=1277 (Compiled frame) > > Thread 22124: (state = BLOCKED) > - > 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getReservedContainers() > @bci=0, line=336 (Compiled frame) > - > org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.preemptFrom(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp, > org.apache.hadoop.yarn.api.records.Resource, java.util.Map, java.util.List, > org.apache.hadoop.yarn.api.records.Resource, java.util.Map, > org.apache.hadoop.yarn.api.records.Resource) @bci=61, line=277 (Compiled > frame) > - > org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.selectCandidates(java.util.Map, > org.apache.hadoop.yarn.api.records.Resource, > org.apache.hadoop.yarn.api.records.Resource) @bci=374,
[jira] [Created] (YARN-6901) A CapacityScheduler app->LeafQueue deadlock found in branch-2.8
Wangda Tan created YARN-6901: Summary: A CapacityScheduler app->LeafQueue deadlock found in branch-2.8 Key: YARN-6901 URL: https://issues.apache.org/jira/browse/YARN-6901 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.8.0 Reporter: Wangda Tan Assignee: Wangda Tan Priority: Blocker Stacktrace: {code} Thread 22068: (state = BLOCKED) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getParent() @bci=0, line=185 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getQueuePath() @bci=8, line=262 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(org.apache.hadoop.yarn.api.records.Resource, org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocation, org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) @bci=183, line=80 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource, org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode, org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) @bci=204, line=747 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(org.apache.hadoop.yarn.api.records.Resource, org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode, org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) @bci=16, line=49 (Compiled frame) - 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(org.apache.hadoop.yarn.api.records.Resource, org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode, org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) @bci=61, line=468 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(org.apache.hadoop.yarn.api.records.Resource, org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode, org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits, org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode) @bci=148, line=876 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode) @bci=157, line=1149 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent) @bci=266, line=1277 (Compiled frame) Thread 22124: (state = BLOCKED) - org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getReservedContainers() @bci=0, line=336 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.preemptFrom(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp, org.apache.hadoop.yarn.api.records.Resource, java.util.Map, java.util.List, org.apache.hadoop.yarn.api.records.Resource, java.util.Map, org.apache.hadoop.yarn.api.records.Resource) @bci=61, line=277 (Compiled frame) - 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.FifoCandidatesSelector.selectCandidates(java.util.Map, org.apache.hadoop.yarn.api.records.Resource, org.apache.hadoop.yarn.api.records.Resource) @bci=374, line=138 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.containerBasedPreemptOrKill(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue, org.apache.hadoop.yarn.api.records.Resource) @bci=264, line=342 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.editSchedule() @bci=34, line=202 (Compiled frame) - org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.invokePolicy() @bci=4, line=81 (Compiled frame) -
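The two BLOCKED threads above appear to be the classic signature of a lock-ordering deadlock: the scheduler path holds the per-application lock and blocks trying to enter the queue (via AbstractCSQueue.getParent), while the preemption-monitor path holds a queue-side lock and blocks entering the application (via SchedulerApplicationAttempt.getReservedContainers). A minimal, hedged sketch of the standard remedy, a single global lock order, follows; the lock names are illustrative stand-ins, and this is not the actual YARN-6901 patch:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class LockOrdering {
    // Stand-ins for the two monitors in the trace (a LeafQueue-side lock and
    // a per-application lock); the names are illustrative, not YARN's.
    private static final ReentrantLock QUEUE_LOCK = new ReentrantLock();
    private static final ReentrantLock APP_LOCK = new ReentrantLock();

    // Both code paths take the locks in one global order (queue before app),
    // so no thread can hold one lock while waiting forever for the other.
    static void doWork(AtomicInteger completed) {
        QUEUE_LOCK.lock();
        try {
            APP_LOCK.lock();
            try {
                completed.incrementAndGet();
            } finally {
                APP_LOCK.unlock();
            }
        } finally {
            QUEUE_LOCK.unlock();
        }
    }

    // Runs a "scheduler" thread and a "preemption monitor" thread and returns
    // how many finished; with a consistent order, both always complete.
    static int runBoth() throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        Thread t1 = new Thread(() -> doWork(completed));
        Thread t2 = new Thread(() -> doWork(completed));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBoth()); // prints 2: both threads finished
    }
}
```

The alternative remedy, copying the needed state out of one object before locking the other, is also common in the RM code base; which one YARN-6901 chose is visible only in the attached patch.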
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105807#comment-16105807 ] Subru Krishnan commented on YARN-6853: -- Thanks for the patch [~giovanni.fumarola], it looks like the MySQL JDBC driver Maven dependency is missing. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySQL > scripts to be able to run Federation with a MySQL server, which will be less > performant but more convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105798#comment-16105798 ] Brook Zhou commented on YARN-6870: -- Findbugs warnings are from existing code not in this patch. > ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a > float, which is imprecise > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
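The YARN-6870 description above shows how adding and then subtracting the same per-container CPU shares as `float` leaves a nonzero residual, which eventually makes the "Not enough cpu" check misfire. The usual remedy is fixed-point integer bookkeeping. The sketch below reproduces the reported float loop and contrasts it with integer "milli-shares"; the milli-share unit is an illustrative choice, not necessarily what the attached patches use:

```java
public class CpuUtilizationDemo {
    // Mirrors the float loop from the report: add seven CPU shares, then
    // subtract the same shares; the residual is not exactly zero.
    static float floatResidual() {
        float result = 0.0f;
        for (int i = 0; i < 7; i++) {
            result += (i == 6 ? (float) 4 : (float) 2) / (float) 18;
        }
        for (int i = 0; i < 7; i++) {
            result -= (i == 6 ? (float) 4 : (float) 2) / (float) 18;
        }
        return result;
    }

    // Same bookkeeping in integer milli-shares: each share is converted to an
    // int once, so integer addition and subtraction cancel exactly.
    static int milliResidual() {
        int result = 0;
        for (int i = 0; i < 7; i++) {
            result += (i == 6 ? 4 : 2) * 1000 / 18;
        }
        for (int i = 0; i < 7; i++) {
            result -= (i == 6 ? 4 : 2) * 1000 / 18;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(floatResidual()); // tiny nonzero drift
        System.out.println(milliResidual()); // exactly 0
    }
}
```

The trade-off is that integer division truncates (2000/18 = 111, losing 2/9 of a milli-share), but the truncation is deterministic and self-cancelling, which is what the allocation check needs.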
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105796#comment-16105796 ] Konstantinos Karanasos commented on YARN-6593: -- Sounds good, thanks [~leftnoteasy] and [~chris.douglas] for the reviews and the feedback! > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch, YARN-6593.002.patch, > YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, > YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch > > > Just removed Fixed version and moved it to target version as we set fix > version only after patch is committed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105795#comment-16105795 ] Hadoop QA commented on YARN-6870: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 43s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6870 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879438/YARN-6870-v3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5bf022fc80e6 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 713349a | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/16602/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/16602/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16602/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a > float, which is imprecise > --- > > Key: YARN-6870 > URL:
[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM
[ https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105786#comment-16105786 ] Hadoop QA commented on YARN-6130: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | || || || || {color:brown} YARN-5355 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 30s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 52s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 8s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 18s{color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 17s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in YARN-5355 has 2 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in YARN-5355 has 5 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 21s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in YARN-5355 has 8 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in YARN-5355 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app in YARN-5355 has 3 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 49s{color} | {color:green} YARN-5355 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} root: The patch generated 0 new + 445 unchanged - 2 fixed = 445 total (was 447) {color} | | {color:green}+1{color} | {color:green} 
mvnsite {color} | {color:green} 5m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} |
[jira] [Updated] (YARN-5514) Clarify DecommissionType.FORCEFUL comment
[ https://issues.apache.org/jira/browse/YARN-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-5514: -- Issue Type: Sub-task (was: Bug) Parent: YARN-914 > Clarify DecommissionType.FORCEFUL comment > - > > Key: YARN-5514 > URL: https://issues.apache.org/jira/browse/YARN-5514 > Project: Hadoop YARN > Issue Type: Sub-task > Components: documentation >Affects Versions: 2.8.0 >Reporter: Robert Kanter >Assignee: Vrushali C >Priority: Minor > Labels: newbie > Fix For: 2.9.0, 3.0.0-alpha1 > > Attachments: YARN-5514.001.patch, YARN-5514.002.patch > > > The comment for > {{org.apache.hadoop.yarn.api.records.DecommissionType.FORCEFUL}} is a little > unclear. It says: > {code} > /** Forceful decommissioning of nodes which are already in progress **/ > {code} > It's not exactly clear of what the nodes are in progress. It should say > something like "Forceful decommissioning of nodes which are already in > progress of decommissioning". -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6897) Refactoring RMWebServices by moving some util methods in RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105771#comment-16105771 ] Giovanni Matteo Fumarola commented on YARN-6897: I did not add junit tests since it is a simple refactoring. The failed test is not related. > Refactoring RMWebServices by moving some util methods in RMWebAppUtil > - > > Key: YARN-6897 > URL: https://issues.apache.org/jira/browse/YARN-6897 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6897.v1.patch, YARN-6897.v2.patch > > > In YARN-6896 the router needs to use some methods already implemented in > {{RMWebServices}}. This jira continues the work done in YARN-6634. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart
[ https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105743#comment-16105743 ] Jian He commented on YARN-6872: --- isRecovery flag is already passed into SchedulerUtils#normalizeAndValidateRequest, I think we can use that flag directly ? And this block of code can be removed now ? {code} // If null amReq has been returned, check if it is the case that // application has specified node label expression while node label // has been disabled. Reject the recovery of this application if it // is true and give clear message so that user can react properly. if (!appContext.getUnmanagedAM() && (application.getAMResourceRequests() == null || application.getAMResourceRequests().isEmpty()) && !YarnConfiguration.areNodeLabelsEnabled(this.conf)) { // check application submission context and see if am resource request // or application itself contains any node label expression. List amReqsFromAppContext = appContext.getAMContainerResourceRequests(); String labelExp = (amReqsFromAppContext != null && !amReqsFromAppContext.isEmpty()) ? amReqsFromAppContext.get(0).getNodeLabelExpression() : null; if (labelExp == null) { labelExp = appContext.getNodeLabelExpression(); } if (labelExp != null && !labelExp.equals(RMNodeLabelsManager.NO_LABEL)) { String message = "Application recovered " + appId + ". NodeLabel is not enabled in cluster, but AM resource request " + "contains a label expression. Consider for NO_LABEL."; LOG.warn(message); } } {code} Did you verify that the labeled resource will be counted as non-labeled resource after RM restart with node label disabled? 
> Ensure apps could run given NodeLabels are disabled post RM switchover/restart > -- > > Key: YARN-6872 > URL: https://issues.apache.org/jira/browse/YARN-6872 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-6872.001.patch > > > Post YARN-6031, a few apps could fail during recovery if they had label > requirements for the AM and node labels were disabled after an RM > restart/switchover. As discussed in YARN-6031, it's better to keep such apps > running, since they may be long-running apps. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
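The direction [~jianhe] suggests in the comment above can be sketched as a recovery-aware normalization step: when node labels are disabled, a labeled request on a fresh submission is rejected as before, but during recovery the label is dropped with a warning so the app keeps running. Every identifier below is a hypothetical stand-in (the real entry point is SchedulerUtils#normalizeAndValidateRequest on ResourceRequest objects), not the RM's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class LabelRecoverySketch {
    // Minimal stand-in for a resource request carrying a node-label
    // expression; the real class is
    // org.apache.hadoop.yarn.api.records.ResourceRequest.
    static class Req {
        String nodeLabelExpression;
        Req(String label) { this.nodeLabelExpression = label; }
    }

    /**
     * Hypothetical normalization: with node labels disabled, reject labeled
     * requests on submission, but on recovery downgrade them to NO_LABEL
     * (empty string) so long-running apps survive the restart.
     */
    static void normalize(List<Req> reqs, boolean nodeLabelsEnabled,
                          boolean isRecovery) {
        for (Req r : reqs) {
            String label = r.nodeLabelExpression;
            if (!nodeLabelsEnabled && label != null && !label.isEmpty()) {
                if (isRecovery) {
                    System.out.println("WARN: dropping node label '" + label
                        + "' during recovery; node labels are disabled");
                    r.nodeLabelExpression = ""; // treat as NO_LABEL
                } else {
                    throw new IllegalArgumentException(
                        "Invalid label expression: node labels are disabled");
                }
            }
        }
    }

    public static void main(String[] args) {
        List<Req> reqs = new ArrayList<>();
        reqs.add(new Req("gpu")); // "gpu" is an example label
        normalize(reqs, false, true); // recovery path: label is dropped
        System.out.println(reqs.get(0).nodeLabelExpression.isEmpty());
    }
}
```

Whether the recovered labeled resource is then accounted as non-labeled, the question raised at the end of the comment, still needs the verification asked for there; this sketch only covers the request side.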
[jira] [Commented] (YARN-6897) Refactoring RMWebServices by moving some util methods in RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105741#comment-16105741 ] Hadoop QA commented on YARN-6897: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 
31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 11 unchanged - 1 fixed = 11 total (was 12) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 352 unchanged - 3 fixed = 352 total (was 355) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 38s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6897 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879421/YARN-6897.v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e4123273c1b2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 746189a | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/16599/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/16599/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output |
[jira] [Commented] (YARN-6726) Fix issues with docker commands executed by container-executor
[ https://issues.apache.org/jira/browse/YARN-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105732#comment-16105732 ] Wangda Tan commented on YARN-6726: -- Thanks [~shaneku...@gmail.com], my only concern is that the newly added regex.h may cause some portability issues. I would like to hear some expert opinions before proceeding. +[~sunilg]/[~chris.douglas]. And could you help run some manual tests to make sure this patch works, since the existing unit test cannot show whether it works end to end. > Fix issues with docker commands executed by container-executor > -- > > Key: YARN-6726 > URL: https://issues.apache.org/jira/browse/YARN-6726 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Shane Kumpf >Assignee: Shane Kumpf > Attachments: YARN-6726.001.patch, YARN-6726.002.patch > > > docker inspect, rm, stop, etc. are issued through container-executor. Commands > other than docker run are not functioning properly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brook Zhou updated YARN-6870: - Attachment: YARN-6870-v3.patch > ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a > float, which is imprecise > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch, YARN-6870-v3.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
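The drift described in YARN-6870 above is easy to reproduce outside YARN. The sketch below mirrors the Bar.java snippet quoted in the issue and contrasts it with an integer accumulator; the class name and the "milli-vcore" approach are illustrative assumptions here, not code from any attached patch.

```java
// A standalone sketch of the float drift described in YARN-6870. The first
// loop pair mirrors the Bar.java snippet quoted in the issue; the second
// shows one possible alternative (tracking integer "milli-vcores"), which is
// an illustration here, not the approach taken by any particular patch.
public class FloatDriftDemo {
    public static void main(String[] args) {
        // Float accumulation: add seven vcore fractions, then subtract the
        // same seven. Rounding error leaves a tiny nonzero residue.
        float result = 0.0f;
        for (int i = 0; i < 7; i++) {
            result += (i == 6 ? 4f : 2f) / 18f;
        }
        for (int i = 0; i < 7; i++) {
            result -= (i == 6 ? 4f : 2f) / 18f;
        }
        // The issue reports this printing 4.4703484E-8 rather than 0.0.
        System.out.println(result);

        // Integer accumulation: keep utilization in integral milli-vcores
        // and divide only for display. Adding and then subtracting the same
        // terms returns the accumulator to exactly zero.
        int milliVcores = 0;
        for (int i = 0; i < 7; i++) {
            milliVcores += (i == 6 ? 4 : 2) * 1000 / 18;
        }
        for (int i = 0; i < 7; i++) {
            milliVcores -= (i == 6 ? 4 : 2) * 1000 / 18;
        }
        System.out.println(milliVcores); // prints 0
    }
}
```

This is why comparisons such as the "Not enough cpu" check quoted above can misfire: the accumulated float allocation never returns exactly to its starting value.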
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105719#comment-16105719 ] Hadoop QA commented on YARN-6853: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} YARN-2915 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 59s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 9s{color} | {color:green} YARN-2915 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6853 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879428/YARN-6853-YARN-2915.v2.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 72675997dd50 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 82ba2f2 | | modules | C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16600/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path
[ https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-6757: - Attachment: YARN-6757.003.patch > Refactor the usage of > yarn.nodemanager.linux-container-executor.cgroups.mount-path > -- > > Key: YARN-6757 > URL: https://issues.apache.org/jira/browse/YARN-6757 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.0.0-alpha4 >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-6757.000.patch, YARN-6757.001.patch, > YARN-6757.002.patch, YARN-6757.003.patch > > > We should add the ability to specify a custom cgroup path. This is how the > documentation of {{linux-container-executor.cgroups.mount-path}} would look > like: > {noformat} > Requested cgroup mount path. Yarn has built in functionality to discover > the system cgroup mount paths, so use this setting only, if the discovery > does not work. > This path must exist before the NodeManager is launched. > The location can vary depending on the Linux distribution in use. > Common locations include /sys/fs/cgroup and /cgroup. > If cgroups are not mounted, set > yarn.nodemanager.linux-container-executor.cgroups.mount > to true. In this case it specifies, where the LCE should attempt to mount > cgroups if not found. > If cgroups is accessible through lxcfs or some other file system, > then set this path and > yarn.nodemanager.linux-container-executor.cgroups.mount to false. > Yarn tries to use this path first, before any cgroup mount point > discovery. > If it cannot find this directory, it falls back to searching for cgroup > mount points in the system. > Only used when the LCE resources handler is set to the > CgroupsLCEResourcesHandler > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5197) RM leaks containers if running container disappears from node update
[ https://issues.apache.org/jira/browse/YARN-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-5197: -- Priority: Critical (was: Major) > RM leaks containers if running container disappears from node update > > > Key: YARN-5197 > URL: https://issues.apache.org/jira/browse/YARN-5197 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.2, 2.6.4 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Critical > Fix For: 2.8.0, 2.6.5, 2.7.4 > > Attachments: YARN-5197.001.patch, YARN-5197.002.patch, > YARN-5197.003.patch, YARN-5197-branch-2.7.003.patch, > YARN-5197-branch-2.8.003.patch > > > Once a node reports a container running in a status update, the corresponding > RMNodeImpl will track the container in its launchedContainers map. If the > node somehow misses sending the completed container status to the RM and the > container simply disappears from subsequent heartbeats, the container will > leak in launchedContainers forever and the container completion event will > not be sent to the scheduler. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105689#comment-16105689 ] Wangda Tan commented on YARN-6593: -- Latest patch looks good, +1, I will commit it next Tue (giving people a couple of days to review). Thanks [~kkaranasos] > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch, YARN-6593.002.patch, > YARN-6593.003.patch, YARN-6593.004.patch, YARN-6593.005.patch, > YARN-6593.006.patch, YARN-6593.007.patch, YARN-6593.008.patch > > > Just removed Fixed version and moved it to target version, as we set fix > version only after the patch is committed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path
[ https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105675#comment-16105675 ] Hadoop QA commented on YARN-6757: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 54s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 8 unchanged - 15 fixed = 8 total (was 23) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 27s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 11s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6757 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879415/YARN-6757.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall
[jira] [Updated] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-6853: --- Parent Issue: YARN-2915 (was: YARN-5597) > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105674#comment-16105674 ] Giovanni Matteo Fumarola commented on YARN-6853: Fixed the whitespaces in v2. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-6853: --- Attachment: YARN-6853-YARN-2915.v2.patch > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch, > YARN-6853-YARN-2915.v2.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105644#comment-16105644 ] Hadoop QA commented on YARN-6853: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} YARN-2915 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 20s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 46s{color} | {color:green} YARN-2915 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6853 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879418/YARN-6853-YARN-2915.v1.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 733d9f5108d4 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 82ba2f2 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/16598/artifact/patchprocess/whitespace-eol.txt | | modules | C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16598/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6895) Preemption reservation may cause regular reservation leaks
[ https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105632#comment-16105632 ] Miklos Szegedi commented on YARN-6895: -- I verified and TestSubmitApplicationWithRMHA fails without the patch as well. I could not repro the TestFSAppStarvation issue. > Preemption reservation may cause regular reservation leaks > -- > > Key: YARN-6895 > URL: https://issues.apache.org/jira/browse/YARN-6895 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha4 >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Blocker > Attachments: YARN-6895.000.patch > > > We found a limitation in the implementation of YARN-6432. If the container > released is smaller than the preemption request, a node reservation is > created that is never deleted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6897) Refactoring RMWebServices by moving some util methods in RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105626#comment-16105626 ] Giovanni Matteo Fumarola commented on YARN-6897: Fixed the Yetus warnings in V2. I ran the tests on my box and they ran successfully with and without my patch. > Refactoring RMWebServices by moving some util methods in RMWebAppUtil > - > > Key: YARN-6897 > URL: https://issues.apache.org/jira/browse/YARN-6897 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6897.v1.patch, YARN-6897.v2.patch > > > In YARN-6896 the router needs to use some methods already implemented in > {{RMWebServices}}. This jira continues the work done in YARN-6634. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6897) Refactoring RMWebServices by moving some util methods in RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-6897: --- Attachment: YARN-6897.v2.patch > Refactoring RMWebServices by moving some util methods in RMWebAppUtil > - > > Key: YARN-6897 > URL: https://issues.apache.org/jira/browse/YARN-6897 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6897.v1.patch, YARN-6897.v2.patch > > > In YARN-6896 the router needs to use some methods already implemented in > {{RMWebServices}}. This jira continues the work done in YARN-6634. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA
[ https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105618#comment-16105618 ] Junping Du commented on YARN-5464: -- [~rkanter], do you have bandwidth for this in the short term? If not, I will take it on. > Server-Side NM Graceful Decommissioning with RM HA > -- > > Key: YARN-5464 > URL: https://issues.apache.org/jira/browse/YARN-5464 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Reporter: Robert Kanter >Priority: Blocker > Attachments: YARN-5464.wip.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5464) Server-Side NM Graceful Decommissioning with RM HA
[ https://issues.apache.org/jira/browse/YARN-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105617#comment-16105617 ] Andrew Wang commented on YARN-5464: --- Ping, is someone going to take this? Beta1 is fast approaching. > Server-Side NM Graceful Decommissioning with RM HA > -- > > Key: YARN-5464 > URL: https://issues.apache.org/jira/browse/YARN-5464 > Project: Hadoop YARN > Issue Type: Sub-task > Components: graceful >Reporter: Robert Kanter >Priority: Blocker > Attachments: YARN-5464.wip.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105607#comment-16105607 ] Giovanni Matteo Fumarola commented on YARN-6853: Attached the new patch with the correct branch. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-6853: --- Attachment: YARN-6853-YARN-2915.v1.patch > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch, YARN-6853-YARN-2915.v1.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path
[ https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105581#comment-16105581 ] Miklos Szegedi commented on YARN-6757: -- Thanks. I updated the documentation including the html. I think that is a better way to describe the details than the xml, which is not visible to everyone. I left a pointer in the xml. I did not cache {{getValidCGroups()}}, since it is just used once. Caching it would waste memory, although not much. > Refactor the usage of > yarn.nodemanager.linux-container-executor.cgroups.mount-path > -- > > Key: YARN-6757 > URL: https://issues.apache.org/jira/browse/YARN-6757 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.0.0-alpha4 >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-6757.000.patch, YARN-6757.001.patch, > YARN-6757.002.patch > > > We should add the ability to specify a custom cgroup path. This is how the > documentation of {{linux-container-executor.cgroups.mount-path}} would look > like: > {noformat} > Requested cgroup mount path. Yarn has built in functionality to discover > the system cgroup mount paths, so use this setting only, if the discovery > does not work. > This path must exist before the NodeManager is launched. > The location can vary depending on the Linux distribution in use. > Common locations include /sys/fs/cgroup and /cgroup. > If cgroups are not mounted, set > yarn.nodemanager.linux-container-executor.cgroups.mount > to true. In this case it specifies, where the LCE should attempt to mount > cgroups if not found. > If cgroups is accessible through lxcfs or some other file system, > then set this path and > yarn.nodemanager.linux-container-executor.cgroups.mount to false. > Yarn tries to use this path first, before any cgroup mount point > discovery. > If it cannot find this directory, it falls back to searching for cgroup > mount points in the system. 
> Only used when the LCE resources handler is set to the > CgroupsLCEResourcesHandler > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
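The two properties discussed in YARN-6757 interact as described in the quoted documentation; a minimal yarn-site.xml sketch of the pre-mounted case follows. The property names come from the issue text; the mount path value is one of the example locations it mentions, not a recommendation.

```xml
<!-- cgroups are already mounted (e.g. by the OS or lxcfs): point YARN at
     the existing mount path and disable YARN's own mount attempt. -->
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  <value>/sys/fs/cgroup</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
  <value>false</value>
</property>
```

In the other mode described in the issue, `cgroups.mount` is set to true and the same `mount-path` instead names where the LCE should attempt to mount cgroups if discovery finds none.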
[jira] [Updated] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path
[ https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-6757: - Attachment: YARN-6757.002.patch > Refactor the usage of > yarn.nodemanager.linux-container-executor.cgroups.mount-path > -- > > Key: YARN-6757 > URL: https://issues.apache.org/jira/browse/YARN-6757 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.0.0-alpha4 >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-6757.000.patch, YARN-6757.001.patch, > YARN-6757.002.patch > > > We should add the ability to specify a custom cgroup path. This is how the > documentation of {{linux-container-executor.cgroups.mount-path}} would look > like: > {noformat} > Requested cgroup mount path. Yarn has built in functionality to discover > the system cgroup mount paths, so use this setting only, if the discovery > does not work. > This path must exist before the NodeManager is launched. > The location can vary depending on the Linux distribution in use. > Common locations include /sys/fs/cgroup and /cgroup. > If cgroups are not mounted, set > yarn.nodemanager.linux-container-executor.cgroups.mount > to true. In this case it specifies, where the LCE should attempt to mount > cgroups if not found. > If cgroups is accessible through lxcfs or some other file system, > then set this path and > yarn.nodemanager.linux-container-executor.cgroups.mount to false. > Yarn tries to use this path first, before any cgroup mount point > discovery. > If it cannot find this directory, it falls back to searching for cgroup > mount points in the system. > Only used when the LCE resources handler is set to the > CgroupsLCEResourcesHandler > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105569#comment-16105569 ] Hadoop QA commented on YARN-6853: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} YARN-6853 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-6853 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879411/YARN-6853.v1.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16596/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL servers which will be less > performant but convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-6853: --- Attachment: YARN-6853.v1.patch > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL server, which will be less > performant but more convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6853) Add MySql Scripts for FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-6853: --- Attachment: (was: YARN-6853.proto.patch) > Add MySql Scripts for FederationStateStore > -- > > Key: YARN-6853 > URL: https://issues.apache.org/jira/browse/YARN-6853 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6853.v1.patch > > > In YARN-3663 we added the SQL scripts for SQLServer. We want to add the MySql > scripts to be able to run Federation with a MySQL server, which will be less > performant but more convenient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
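The actual schema comes from the SQLServer scripts added in YARN-3663, which are not reproduced in this thread, so the following MySQL fragment is purely illustrative: the table and column names and sizes are hypothetical stand-ins, not the real FederationStateStore DDL.

```sql
-- Hypothetical sketch only: real table/column definitions come from the
-- YARN-3663 SQLServer scripts that the MySQL scripts would mirror.
-- A table of this shape could map each application to its home sub-cluster.
CREATE TABLE IF NOT EXISTS applicationsHomeSubCluster (
  applicationId    VARCHAR(64)  NOT NULL,
  homeSubClusterId VARCHAR(256) NOT NULL,
  PRIMARY KEY (applicationId)
) ENGINE=InnoDB;
```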
[jira] [Commented] (YARN-6895) Preemption reservation may cause regular reservation leaks
[ https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105536#comment-16105536 ] Hadoop QA commented on YARN-6895: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 39 unchanged - 4 fixed = 39 total (was 43) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 54s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6895 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879397/YARN-6895.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5de1815a58cb 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 77791e4 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/16594/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/16594/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16594/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Preemption
[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer
[ https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105521#comment-16105521 ] Naganarasimha G R commented on YARN-2919: - Hi [~jianhe] & [~djp], if you both still feel it's better to keep it as is, then I will close this JIRA. > Potential race between renew and cancel in DelegationTokenRenwer > - > > Key: YARN-2919 > URL: https://issues.apache.org/jira/browse/YARN-2919 > Project: Hadoop YARN > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Karthik Kambatla >Assignee: Naganarasimha G R >Priority: Critical > Attachments: YARN-2919.002.patch, YARN-2919.003.patch, > YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch > > > YARN-2874 fixes a deadlock in DelegationTokenRenewer, but a race remains: an > in-flight renewal isn't interrupted by a cancel. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6900: - Parent Issue: YARN-5597 (was: YARN-2915) > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation, nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > > YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only > support SQL-based stores; this JIRA tracks adding a ZooKeeper-based > implementation to simplify deployment, as ZooKeeper is already popularly used > for {{RMStateStore}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6788) Improve performance of resource profile branch
[ https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105499#comment-16105499 ] Hadoop QA commented on YARN-6788: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} YARN-3926 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 50s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 4s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 8s{color} | {color:green} YARN-3926 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 24s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in YARN-3926 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s{color} | {color:green} YARN-3926 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 15 new + 194 unchanged - 16 fixed = 209 total (was 210) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 14s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} hadoop-yarn-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 25s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 45s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 37s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The
[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6900: - Description: YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only support SQL-based stores; this JIRA tracks adding a ZooKeeper-based implementation to simplify deployment, as ZooKeeper is already popularly used for {{RMStateStore}}. (was: YARN-5408 defines the unified {{FederationStateStore}} API. This JIRA tracks an ZooKeeper based implementation as currently we only support SQL.) > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation, nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > > YARN-5408 defines the unified {{FederationStateStore}} API. Currently we only > support SQL-based stores; this JIRA tracks adding a ZooKeeper-based > implementation to simplify deployment, as ZooKeeper is already popularly used > for {{RMStateStore}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6900: - Target Version/s: 3.0.0-beta1 (was: YARN-2915) > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation, nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > > YARN-5408 defines the unified {{FederationStateStore}} API. This JIRA tracks > a ZooKeeper-based implementation, as currently we only support SQL. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6900: - Component/s: federation > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation, nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > > YARN-5408 defines the unified {{FederationStateStore}} API. This JIRA tracks > a ZooKeeper-based implementation, as currently we only support SQL. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6900: - Description: YARN-5408 defines the unified {{FederationStateStore}} API. This JIRA tracks a ZooKeeper-based implementation, as currently we only support SQL. (was: YARN-3662 defines the FederationMembershipStateStore API. This JIRA tracks an in-memory based implementation which is useful for both single-box testing and for future unit tests that depend on the state store.) > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation, nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > > YARN-5408 defines the unified {{FederationStateStore}} API. This JIRA tracks > a ZooKeeper-based implementation, as currently we only support SQL. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6900: - Hadoop Flags: (was: Reviewed) > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation, nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > > YARN-5408 defines the unified {{FederationStateStore}} API. This JIRA tracks > a ZooKeeper-based implementation, as currently we only support SQL. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
[ https://issues.apache.org/jira/browse/YARN-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan reassigned YARN-6900: Assignee: Inigo Goiri (was: Ellen Hui) > ZooKeeper based implementation of the FederationStateStore > -- > > Key: YARN-6900 > URL: https://issues.apache.org/jira/browse/YARN-6900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Inigo Goiri > > YARN-3662 defines the FederationMembershipStateStore API. This JIRA tracks an > in-memory based implementation which is useful for both single-box testing > and for future unit tests that depend on the state store. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6900) ZooKeeper based implementation of the FederationStateStore
Subru Krishnan created YARN-6900: Summary: ZooKeeper based implementation of the FederationStateStore Key: YARN-6900 URL: https://issues.apache.org/jira/browse/YARN-6900 Project: Hadoop YARN Issue Type: Sub-task Components: nodemanager, resourcemanager Reporter: Subru Krishnan Assignee: Ellen Hui YARN-3662 defines the FederationMembershipStateStore API. This JIRA tracks an in-memory based implementation which is useful for both single-box testing and for future unit tests that depend on the state store. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6897) Refactoring RMWebServices by moving some util methods in RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6897: - Parent Issue: YARN-2915 (was: YARN-5597) > Refactoring RMWebServices by moving some util methods in RMWebAppUtil > - > > Key: YARN-6897 > URL: https://issues.apache.org/jira/browse/YARN-6897 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6897.v1.patch > > > In YARN-6896 the router needs to use some methods already implemented in > {{RMWebServices}}. This jira continues the work done in YARN-6634. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6897) Refactoring RMWebServices by moving some util methods in RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105479#comment-16105479 ] Subru Krishnan commented on YARN-6897: -- The patch is very straightforward. [~giovanni.fumarola], can you fix the Yetus warnings? > Refactoring RMWebServices by moving some util methods in RMWebAppUtil > - > > Key: YARN-6897 > URL: https://issues.apache.org/jira/browse/YARN-6897 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-6897.v1.patch > > > In YARN-6896 the router needs to use some methods already implemented in > {{RMWebServices}}. This jira continues the work done in YARN-6634. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105475#comment-16105475 ] Hadoop QA commented on YARN-6870: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 39s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 11s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 50s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6870 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879394/YARN-6870-v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2d8f2c7a6fcc 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 77791e4 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/16593/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/16593/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/16593/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16593/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > ResourceUtilization/ContainersMonitorImpl is
[jira] [Commented] (YARN-6898) RM node labels page should display total used resources of each label.
[ https://issues.apache.org/jira/browse/YARN-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105474#comment-16105474 ] Naganarasimha G R commented on YARN-6898: - [~sunilg], yes, we can show it, but it's an add-on; it's already available on the scheduler web UI page, so it's not a blocking issue as such. > RM node labels page should display total used resources of each label. > -- > > Key: YARN-6898 > URL: https://issues.apache.org/jira/browse/YARN-6898 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler >Reporter: YunFan Zhou >Assignee: YunFan Zhou > > The RM node labels page only shows *Label Name*, *Label Type*, *Num Of Active > NMs*, and *Total Resource* > information for each node label, but there isn't any place for us to see the > total used resource of the node label. > The total used resource of a node label is very important, because we can > use it to check the overall load for that > label. We will implement it. Any suggestions? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM
[ https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-6130: --- Attachment: YARN-6130-YARN-5355.06.patch > [ATSv2 Security] Generate a delegation token for AM when app collector is > created and pass it to AM via NM and RM > - > > Key: YARN-6130 > URL: https://issues.apache.org/jira/browse/YARN-6130 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-5355-merge-blocker > Attachments: YARN-6130-YARN-5355.01.patch, > YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, > YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, > YARN-6130-YARN-5355.06.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM
[ https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-6130: --- Attachment: (was: YARN-6130-YARN-5355.06.patch) > [ATSv2 Security] Generate a delegation token for AM when app collector is > created and pass it to AM via NM and RM > - > > Key: YARN-6130 > URL: https://issues.apache.org/jira/browse/YARN-6130 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-5355-merge-blocker > Attachments: YARN-6130-YARN-5355.01.patch, > YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, > YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6897) Refactoring RMWebServices by moving some util methods in RMWebAppUtil
[ https://issues.apache.org/jira/browse/YARN-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105455#comment-16105455 ] Hadoop QA commented on YARN-6897: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 
31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 11 unchanged - 1 fixed = 12 total (was 12) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 1 new + 352 unchanged - 3 fixed = 353 total (was 355) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 25s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate | | Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | | | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA | | | org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6897 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879379/YARN-6897.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e769bd74a29f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9ea01fd | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle |
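The whitespace -1 above points at `git apply --whitespace=fix` (the bot's message elides the patch filename). A minimal sketch of what that flag does, run in a throwaway repository with illustrative file names — not the actual YARN-6897 patch:

```shell
# Demonstrates `git apply --whitespace=fix` from the QA report: a patch whose
# added lines carry trailing whitespace is applied with the trailing blanks
# stripped on apply. All paths here are throwaway/illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email qa@example.invalid
git config user.name qa
printf 'hello\n' > a.txt
git add a.txt && git commit -qm init
printf 'hello   \nworld\n' > a.txt         # edit that introduces trailing spaces
git diff > yarn-whitespace.patch
git checkout -- a.txt                      # back to the clean tree
git apply --whitespace=fix yarn-whitespace.patch
head -1 a.txt | grep -q 'hello$'           # trailing blanks were stripped
```

Without `--whitespace=fix`, `git apply` would keep the trailing spaces (and `--whitespace=error` would refuse the patch outright), which is exactly what the precommit check flags.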
[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM
[ https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-6130: --- Attachment: YARN-6130-YARN-5355.06.patch > [ATSv2 Security] Generate a delegation token for AM when app collector is > created and pass it to AM via NM and RM > - > > Key: YARN-6130 > URL: https://issues.apache.org/jira/browse/YARN-6130 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-5355-merge-blocker > Attachments: YARN-6130-YARN-5355.01.patch, > YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, > YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, > YARN-6130-YARN-5355.06.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM
[ https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-6130: --- Attachment: (was: YARN-6130-YARN-5355.06.patch) > [ATSv2 Security] Generate a delegation token for AM when app collector is > created and pass it to AM via NM and RM > - > > Key: YARN-6130 > URL: https://issues.apache.org/jira/browse/YARN-6130 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-5355-merge-blocker > Attachments: YARN-6130-YARN-5355.01.patch, > YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, > YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6895) Preemption reservation may cause regular reservation leaks
[ https://issues.apache.org/jira/browse/YARN-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-6895: - Attachment: YARN-6895.000.patch > Preemption reservation may cause regular reservation leaks > -- > > Key: YARN-6895 > URL: https://issues.apache.org/jira/browse/YARN-6895 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha4 >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Blocker > Attachments: YARN-6895.000.patch > > > We found a limitation in the implementation of YARN-6432. If the container > released is smaller than the preemption request, a node reservation is > created that is never deleted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6847) [ATSv2] NPE in RM while starting timeline collector on recovery after explicit failover
[ https://issues.apache.org/jira/browse/YARN-6847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105415#comment-16105415 ] Varun Saxena commented on YARN-6847: Have backported YARN-6102 to branches YARN-5355 and YARN-5355-branch-2 so closing this. > [ATSv2] NPE in RM while starting timeline collector on recovery after > explicit failover > --- > > Key: YARN-6847 > URL: https://issues.apache.org/jira/browse/YARN-6847 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Varun Saxena >Assignee: Varun Saxena > > {noformat} > 2017-07-20 03:20:50,742 ERROR [Thread-449] resourcemanager.ResourceManager > (ResourceManager.java:serviceStart(763)) - Failed to load/recover state > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.startTimelineCollector(RMAppImpl.java:535) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:467) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:336) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:576) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1419) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:758) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1178) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1218) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1214) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1214) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:319) > at > org.apache.hadoop.yarn.client.ProtocolHATestBase.explicitFailover(ProtocolHATestBase.java:205) > at > org.apache.hadoop.yarn.client.ProtocolHATestBase$1.run(ProtocolHATestBase.java:250) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-6847) [ATSv2] NPE in RM while starting timeline collector on recovery after explicit failover
[ https://issues.apache.org/jira/browse/YARN-6847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena resolved YARN-6847. Resolution: Duplicate > [ATSv2] NPE in RM while starting timeline collector on recovery after > explicit failover > --- > > Key: YARN-6847 > URL: https://issues.apache.org/jira/browse/YARN-6847 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Varun Saxena >Assignee: Varun Saxena > > {noformat} > 2017-07-20 03:20:50,742 ERROR [Thread-449] resourcemanager.ResourceManager > (ResourceManager.java:serviceStart(763)) - Failed to load/recover state > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.startTimelineCollector(RMAppImpl.java:535) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:467) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:336) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:576) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1419) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:758) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1178) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1218) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1214) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965) > at > 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1214) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:319) > at > org.apache.hadoop.yarn.client.ProtocolHATestBase.explicitFailover(ProtocolHATestBase.java:205) > at > org.apache.hadoop.yarn.client.ProtocolHATestBase$1.run(ProtocolHATestBase.java:250) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM
[ https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105408#comment-16105408 ] Hadoop QA commented on YARN-6130: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} YARN-6130 does not apply to YARN-5355. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-6130 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879393/YARN-6130-YARN-5355.06.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16592/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [ATSv2 Security] Generate a delegation token for AM when app collector is > created and pass it to AM via NM and RM > - > > Key: YARN-6130 > URL: https://issues.apache.org/jira/browse/YARN-6130 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-5355-merge-blocker > Attachments: YARN-6130-YARN-5355.01.patch, > YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, > YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, > YARN-6130-YARN-5355.06.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brook Zhou updated YARN-6870: - Attachment: YARN-6870-v2.patch Thanks, made the change. > ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a > float, which is imprecise > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch, > YARN-6870-v2.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
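The float-drift program in the description suggests an integer-based remedy: track CPU in fixed integer units so the rounding is applied once per value, deterministically, and symmetric add/subtract cancels exactly. A minimal sketch, assuming a hypothetical milli-vcore unit — this is an illustration of the failure mode and one fix, not the actual YARN-6870 patch:

```java
public class CpuAccounting {
    // Convert a whole-vcore request into milli-vcores of an 18-core node's
    // share; integer division rounds once, deterministically.
    static long milliVcoreShare(int vcores) {
        return vcores * 1000L / 18;
    }

    public static void main(String[] args) {
        // Float accumulation, as in the JIRA snippet: the running sum does
        // not return to zero after a symmetric add/subtract sequence.
        float result = 0.0f;
        for (int i = 0; i < 7; i++) result += (i == 6 ? 4f : 2f) / 18f;
        for (int i = 0; i < 7; i++) result -= (i == 6 ? 4f : 2f) / 18f;
        System.out.println(result); // tiny non-zero residue (4.4703484E-8 in the report)

        // Integer milli-vcore accumulation: the same rounded value is added
        // and later subtracted, so the sum cancels exactly.
        long milli = 0;
        for (int i = 0; i < 7; i++) milli += milliVcoreShare(i == 6 ? 4 : 2);
        for (int i = 0; i < 7; i++) milli -= milliVcoreShare(i == 6 ? 4 : 2);
        System.out.println(milli); // prints 0
    }
}
```

The residue in the float version is why the scheduler can log "Not enough cpu" even though every container's allocation was nominally released; integer units avoid the re-rounding on every update.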
[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM
[ https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-6130: --- Attachment: YARN-6130-YARN-5355.06.patch > [ATSv2 Security] Generate a delegation token for AM when app collector is > created and pass it to AM via NM and RM > - > > Key: YARN-6130 > URL: https://issues.apache.org/jira/browse/YARN-6130 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-5355-merge-blocker > Attachments: YARN-6130-YARN-5355.01.patch, > YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, > YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, > YARN-6130-YARN-5355.06.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105398#comment-16105398 ] Arun Suresh commented on YARN-6870: --- +1 pending the javadoc fix > ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a > float, which is imprecise > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. 
[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105396#comment-16105396 ] Arun Suresh commented on YARN-6870: --- Thanks for the patch Brook, Looks like the javadoc issue is due to this: {noformat} [ERROR] /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/AllocationBasedResourceUtilizationTracker.java:145: error: malformed HTML [ERROR] * @return True if currentAllocation*totalCores + coresRequested <= {noformat} Just replace the * with an x > ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a > float, which is imprecise > --- > > Key: YARN-6870 > URL: https://issues.apache.org/jira/browse/YARN-6870 > Project: Hadoop YARN > Issue Type: Bug > Components: api, nodemanager >Reporter: Brook Zhou >Assignee: Brook Zhou > Attachments: YARN-6870-v0.patch, YARN-6870-v1.patch > > > We have seen issues on our clusters where the current way of computing CPU > usage is having float-arithmetic inaccuracies (the bug is still there in > trunk) > Simple program to illustrate: > {code:title=Bar.java|borderStyle=solid} > public static void main(String[] args) throws Exception { > float result = 0.0f; > for (int i = 0; i < 7; i++) { > if (i == 6) { > result += (float) 4 / (float)18; > } else { > result += (float) 2 / (float)18; > } > } > for (int i = 0; i < 7; i++) { > if (i == 6) { > result -= (float) 4 / (float)18; > } else { > result -= (float) 2 / (float)18; > } > } > System.out.println(result); > } > {code} > // Printed > 4.4703484E-8 > 2017-04-12 05:43:24,014 INFO > org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: > Not enough cpu for [container_e3295_1491978508342_0467_01_30], Current > CPU Allocation: [0.891], Requested CPU Allocation: [0.] > There are a few places with this issue: > 1. 
ResourceUtilization.java - set/getCPU both use float. When > ContainerScheduler calls > ContainersMonitor.increase/decreaseResourceUtilization, this may lead to > issues. > 2. AllocationBasedResourceUtilizationTracker.java - hasResourcesAvailable > uses float as well for CPU computation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
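The "malformed HTML" error quoted above comes from doclint (enabled by default since JDK 8), which rejects a bare `<` inside a javadoc comment. A sketch of the doclint-safe rewrite — class, method, and parameter names below are hypothetical stand-ins, not the actual AllocationBasedResourceUtilizationTracker signature:

```java
// Illustrative only: doclint flags a bare '<' in javadoc as malformed HTML;
// wrapping the expression in {@literal ...} (or using the &lt; entity)
// escapes it, and writing 'x' instead of '*' avoids stray markup.
public class UtilizationCheckDoc {
    /**
     * @return true if currentAllocation x totalCores + coresRequested
     *         {@literal <=} the node's total vcores
     */
    public boolean hasCpuAvailable(float currentAllocation, int totalCores,
                                   int coresRequested, int totalVcores) {
        return currentAllocation * totalCores + coresRequested <= totalVcores;
    }
}
```

Replacing `*` with `x`, as suggested in the comment, fixes the other half of the problem: a literal asterisk in running javadoc text can be misread as markup by some doclets.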
[jira] [Updated] (YARN-6899) Validate Placement Constraints
[ https://issues.apache.org/jira/browse/YARN-6899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos updated YARN-6899: - Issue Type: Bug (was: Sub-task) Parent: (was: YARN-6592) > Validate Placement Constraints > -- > > Key: YARN-6899 > URL: https://issues.apache.org/jira/browse/YARN-6899 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > > This JIRA introduces a validator for {{PlacementConstraint}} objects. > For example, a composite constraint always has to have children constraints, > the max cardinality of a cardinality constraint cannot be smaller than the > min cardinality, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-6899) Validate Placement Constraints
[ https://issues.apache.org/jira/browse/YARN-6899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos resolved YARN-6899. -- Resolution: Duplicate Duplicate of YARN-6621. > Validate Placement Constraints > -- > > Key: YARN-6899 > URL: https://issues.apache.org/jira/browse/YARN-6899 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > > This JIRA introduces a validator for {{PlacementConstraint}} objects. > For example, a composite constraint always has to have children constraints, > the max cardinality of a cardinality constraint cannot be smaller than the > min cardinality, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6621) Validate Placement Constraints
[ https://issues.apache.org/jira/browse/YARN-6621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos reassigned YARN-6621: Assignee: Konstantinos Karanasos > Validate Placement Constraints > -- > > Key: YARN-6621 > URL: https://issues.apache.org/jira/browse/YARN-6621 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > > This library will be used to validate placement constraints. > It can serve multiple validation purposes: > 1) Check if the placement constraint has a valid form (e.g., a cardinality > constraint should not have an associated target expression, a DELAYED_OR > compound expression should only appear in specific places in a constraint > tree, etc.) > 2) Check if the constraints given by a user are conflicting (e.g., > cardinality more than 5 in a host and less than 3 in a rack). > 3) Check that the constraints are properly added in the Placement Constraint > Manager. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
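The "valid form" checks listed in point 1 can be sketched as a recursive walk over the constraint tree. The types below are hypothetical stand-ins — the real {{PlacementConstraint}} classes belong to the YARN-6592 work and are not reproduced here; this only illustrates the two rules quoted in the description (a composite must have children; max cardinality must not be below min):

```java
import java.util.List;

// Hypothetical stand-in constraint nodes, for illustration only.
class Cardinality {
    final int min, max;
    Cardinality(int min, int max) { this.min = min; this.max = max; }
}

class Composite {
    final List<Object> children;
    Composite(List<Object> children) { this.children = children; }
}

public final class ConstraintValidator {
    private ConstraintValidator() { }

    /** Walks the constraint tree and throws on the first malformed node. */
    public static void validate(Object c) {
        if (c instanceof Cardinality) {
            Cardinality card = (Cardinality) c;
            // Max cardinality cannot be smaller than min cardinality.
            if (card.min < 0 || card.max < card.min) {
                throw new IllegalArgumentException(
                    "invalid cardinality range [" + card.min + ", " + card.max + "]");
            }
        } else if (c instanceof Composite) {
            Composite comp = (Composite) c;
            // A composite constraint always has to have children constraints.
            if (comp.children == null || comp.children.isEmpty()) {
                throw new IllegalArgumentException("composite constraint has no children");
            }
            for (Object child : comp.children) {
                validate(child); // recurse into the tree
            }
        }
    }
}
```

Checks 2 and 3 (conflicting constraints across scopes, and registration in the Placement Constraint Manager) need cluster context and would layer on top of this purely structural pass.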
[jira] [Updated] (YARN-6621) Validate Placement Constraints
[ https://issues.apache.org/jira/browse/YARN-6621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos updated YARN-6621: - Summary: Validate Placement Constraints (was: Validator for Placement Constraints) > Validate Placement Constraints > -- > > Key: YARN-6621 > URL: https://issues.apache.org/jira/browse/YARN-6621 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos > > This library will be used to validate placement constraints. > It can serve multiple validation purposes: > 1) Check if the placement constraint has a valid form (e.g., a cardinality > constraint should not have an associated target expression, a DELAYED_OR > compound expression should only appear in specific places in a constraint > tree, etc.) > 2) Check if the constraints given by a user are conflicting (e.g., > cardinality more than 5 in a host and less than 3 in a rack). > 3) Check that the constraints are properly added in the Placement Constraint > Manager. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6870) ResourceUtilization/ContainersMonitorImpl is calculating CPU utilization as a float, which is imprecise
[ https://issues.apache.org/jira/browse/YARN-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16105368#comment-16105368 ] Hadoop QA commented on YARN-6870: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 6s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6870 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879383/YARN-6870-v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cb46ad6eb560 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9ea01fd | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/16590/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/16590/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/16590/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/16590/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > ResourceUtilization/ContainersMonitorImpl is
[jira] [Created] (YARN-6899) Validate Placement Constraints
Konstantinos Karanasos created YARN-6899: Summary: Validate Placement Constraints Key: YARN-6899 URL: https://issues.apache.org/jira/browse/YARN-6899 Project: Hadoop YARN Issue Type: Sub-task Reporter: Konstantinos Karanasos Assignee: Konstantinos Karanasos This JIRA introduces a validator for {{PlacementConstraint}} objects. For example, a composite constraint always has to have children constraints, the max cardinality of a cardinality constraint cannot be smaller than the min cardinality, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org