[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611807#comment-15611807 ]

Hudson commented on MAPREDUCE-6541:
-----------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10703 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10703/])
MAPREDUCE-6541. Exclude scheduled reducer memory when calculating (naganarasimha_gr: rev 060558c6f221ded0b014189d5b82eee4cc7b576b)
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java

> Exclude scheduled reducer memory when calculating available mapper slots from
> headroom to avoid deadlock
> -----------------------------------------------------------------------------
>
>              Key: MAPREDUCE-6541
>              URL: https://issues.apache.org/jira/browse/MAPREDUCE-6541
>          Project: Hadoop Map/Reduce
>       Issue Type: Bug
> Affects Versions: 2.7.1
>         Reporter: Wangda Tan
>         Assignee: Varun Saxena
>          Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
>      Attachments: MAPREDUCE-6541.01.patch, MAPREDUCE-6541.02.patch
>
>
> We saw an MR deadlock recently:
> - When NMs are restarted by the framework without recovery enabled,
>   containers running on those nodes are identified as "ABORTED", and the
>   MR AM will try to reschedule the "ABORTED" mapper containers.
> - Since such lost mappers are "ABORTED" containers, the MR AM assigns
>   normal mapper priority (priority=20) to these mapper requests. If there
>   is any pending reducer (priority=10) at the same time, the mapper
>   requests must wait until the reducer requests are satisfied.
> - In our test, one mapper needed 700+ MB, a reducer needed 1000+ MB, and
>   the RM's available resource = mapper-request = 700+ MB. Only one job was
>   running in the system, so the scheduler could not allocate more reducer
>   containers, AND the MR AM thought there was enough headroom for the
>   mapper, so reducer containers would not be preempted.
> MAPREDUCE-6302 solves most of the problem, but on the other hand, I think
> we may need to exclude scheduled reducers' resources when calculating
> #available-mapper-slots from the headroom, so that we can avoid excessive
> reducer preemption.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
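The headroom arithmetic described in the report can be sketched as follows. This is a hypothetical illustration, not the actual RMContainerAllocator code: the class, method name, and parameters are invented for the example. It only shows why subtracting scheduled-reducer memory from the headroom changes the number of mapper slots the AM believes it has.

```java
// Hypothetical sketch of the headroom calculation discussed in this issue.
// Not the real RMContainerAllocator logic; names and signatures are invented.
public class HeadroomSketch {
    // Mapper slots the AM believes are available: usable headroom divided by
    // the per-mapper memory request. Excluding memory already promised to
    // scheduled (but not yet allocated) reducers shrinks the usable headroom.
    static int availableMapperSlots(long headroomMb,
                                    long scheduledReducerMb,
                                    long mapperMb) {
        long usable = Math.max(0L, headroomMb - scheduledReducerMb);
        return (int) (usable / mapperMb);
    }

    public static void main(String[] args) {
        // Scenario from the report: headroom ~700 MB, each mapper needs
        // ~700 MB, one ~1000 MB reducer is scheduled but cannot be allocated.
        // Naive calculation ignores the scheduled reducer: 700/700 = 1 slot,
        // so the AM thinks the mapper fits and never preempts the reducer.
        System.out.println(availableMapperSlots(700, 0, 700));    // 1
        // Excluding the scheduled reducer: max(0, 700-1000)/700 = 0 slots,
        // so the AM sees no room for the mapper and reducer preemption can
        // kick in, breaking the deadlock.
        System.out.println(availableMapperSlots(700, 1000, 700)); // 0
    }
}
```

With the naive calculation the AM and the scheduler wait on each other indefinitely; once scheduled-reducer memory is excluded, the zero-slot result triggers the existing preemption path.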
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611763#comment-15611763 ]

Naganarasimha G R commented on MAPREDUCE-6541:
----------------------------------------------

Thanks for the contribution [~varun_saxena] and the review from [~wangda]. Committed it to trunk, branch-2 & branch-2.8.
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611734#comment-15611734 ]

Naganarasimha G R commented on MAPREDUCE-6541:
----------------------------------------------

Not required, will commit the patch!
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611608#comment-15611608 ]

Varun Saxena commented on MAPREDUCE-6541:
-----------------------------------------

[~Naganarasimha], want me to fix checkstyle? Most of them (i.e. whitespace after {) are false negatives.
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611597#comment-15611597 ]

Hadoop QA commented on MAPREDUCE-6541:
--------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 11s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 6m 42s | trunk passed |
| +1 | compile | 0m 29s | trunk passed |
| +1 | checkstyle | 0m 18s | trunk passed |
| +1 | mvnsite | 0m 28s | trunk passed |
| +1 | mvneclipse | 0m 16s | trunk passed |
| +1 | findbugs | 0m 34s | trunk passed |
| +1 | javadoc | 0m 15s | trunk passed |
| +1 | mvninstall | 0m 22s | the patch passed |
| +1 | compile | 0m 20s | the patch passed |
| +1 | javac | 0m 20s | the patch passed |
| -1 | checkstyle | 0m 15s | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app: The patch generated 8 new + 226 unchanged - 0 fixed = 234 total (was 226) |
| +1 | mvnsite | 0m 27s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 42s | the patch passed |
| +1 | javadoc | 0m 13s | the patch passed |
| +1 | unit | 9m 0s | hadoop-mapreduce-client-app in the patch passed. |
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 21m 34s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835550/MAPREDUCE-6541.02.patch |
| JIRA Issue | MAPREDUCE-6541 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 34acfc665c56 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4e403de |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6780/artifact/patchprocess/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt |
| Test Results | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6780/testReport/ |
| modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6780/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611536#comment-15611536 ]

Varun Saxena commented on MAPREDUCE-6541:
-----------------------------------------

Updated the patch.
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611528#comment-15611528 ]

Varun Saxena commented on MAPREDUCE-6541:
-----------------------------------------

Sure. Will do it shortly.
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611494#comment-15611494 ]

Naganarasimha G R commented on MAPREDUCE-6541:
----------------------------------------------

Hi [~varun_saxena], can you please rebase the patch!
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425755#comment-15425755 ]

Hadoop QA commented on MAPREDUCE-6541:
--------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 6s | MAPREDUCE-6541 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12771973/MAPREDUCE-6541.01.patch |
| JIRA Issue | MAPREDUCE-6541 |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6676/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002199#comment-15002199 ]

Hadoop QA commented on MAPREDUCE-6541:
--------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 6s | docker + precommit patch detected. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 3m 3s | trunk passed |
| +1 | compile | 0m 17s | trunk passed with JDK v1.8.0_60 |
| +1 | compile | 0m 20s | trunk passed with JDK v1.7.0_79 |
| +1 | checkstyle | 0m 10s | trunk passed |
| +1 | mvnsite | 0m 24s | trunk passed |
| +1 | mvneclipse | 0m 15s | trunk passed |
| +1 | findbugs | 0m 43s | trunk passed |
| +1 | javadoc | 0m 17s | trunk passed with JDK v1.8.0_60 |
| +1 | javadoc | 0m 18s | trunk passed with JDK v1.7.0_79 |
| +1 | mvninstall | 0m 23s | the patch passed |
| +1 | compile | 0m 18s | the patch passed with JDK v1.8.0_60 |
| +1 | javac | 0m 18s | the patch passed |
| +1 | compile | 0m 19s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 0m 19s | the patch passed |
| +1 | checkstyle | 0m 10s | the patch passed |
| +1 | mvnsite | 0m 23s | the patch passed |
| +1 | mvneclipse | 0m 15s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 0m 55s | the patch passed |
| +1 | javadoc | 0m 16s | the patch passed with JDK v1.8.0_60 |
| +1 | javadoc | 0m 18s | the patch passed with JDK v1.7.0_79 |
| -1 | unit | 9m 21s | hadoop-mapreduce-client-app in the patch failed with JDK v1.8.0_60. |
| +1 | unit | 10m 36s | hadoop-mapreduce-client-app in the patch passed with JDK v1.7.0_79. |
| -1 | asflicense | 0m 25s | Patch generated 7 ASF License warnings. |
| | | 30m 33s | |

|| Reason || Tests ||
| JDK v1.8.0_60 Timed out junit tests | org.apache.hadoop.mapreduce.v2.app.TestFail |

|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-11-12 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12771973/MAPREDUCE-6541.01.patch |
| JIRA Issue | MAPREDUCE-6541 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 2ff195d93dd8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |
[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock
[ https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996998#comment-14996998 ]

Wangda Tan commented on MAPREDUCE-6541:
---------------------------------------

[~varun_saxena], yes you're correct, updated title/desc. Thanks,