[jira] [Commented] (MAPREDUCE-6471) Document distcp incremental copy
[ https://issues.apache.org/jira/browse/MAPREDUCE-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14740161#comment-14740161 ]

nijel commented on MAPREDUCE-6471:
----------------------------------

Please feel free to reassign if the work is already started.

> Document distcp incremental copy
> --------------------------------
>
>                 Key: MAPREDUCE-6471
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6471
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: distcp
>    Affects Versions: 2.7.1
>            Reporter: Arpit Agarwal
>            Assignee: nijel
>              Labels: newbie
>
> MAPREDUCE-5899 added distcp support for incremental copy with a new
> {{append}} flag. It should be documented.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
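Until that documentation lands, a minimal usage sketch of the {{append}} flag added by MAPREDUCE-5899 (the cluster addresses and paths below are placeholders; in DistCp, -append is only valid together with -update):

```shell
# Incremental copy sketch: with -update plus -append, a destination file
# whose existing content is a prefix of the source file is extended in
# place by copying only the new tail, instead of being re-copied whole.
hadoop distcp -update -append hdfs://nn1:8020/data/logs hdfs://nn2:8020/data/logs
```

This cannot run without a live cluster, so treat it as a usage fragment rather than a verified invocation.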
[jira] [Assigned] (MAPREDUCE-6471) Document distcp incremental copy
[ https://issues.apache.org/jira/browse/MAPREDUCE-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel reassigned MAPREDUCE-6471:
--------------------------------

    Assignee: nijel
[jira] [Commented] (MAPREDUCE-6241) Native compilation fails for Checksum.cc due to an incompatibility of assembler register constraint for PowerPC
[ https://issues.apache.org/jira/browse/MAPREDUCE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14740050#comment-14740050 ]

Colin Patrick McCabe commented on MAPREDUCE-6241:
-------------------------------------------------

We should understand why the code is there, even if it was copied from somewhere else (actually, especially if it was copied). In general, the organization of bulk_crc32.c could be improved: having so many ifdefs makes it difficult to figure out what code is actually being called and to do reviews. I would like to see the hardware-specific parts moved into separate files rather than having so many ifdefs in the code.

> Native compilation fails for Checksum.cc due to an incompatibility of
> assembler register constraint for PowerPC
> ---------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6241
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6241
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 3.0.0, 2.6.0
>         Environment: Debian/Jessie, kernel 3.18.5, ppc64 GNU/Linux
>                      gcc (Debian 4.9.1-19)
>                      protobuf 2.6.1
>                      OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-2)
>                      OpenJDK Zero VM (build 24.65-b04, interpreted mode)
>                      source was cloned (and updated) from Apache Hadoop's git repository
>            Reporter: Stephan Drescher
>            Assignee: Binglin Chang
>              Labels: BB2015-05-TBR, features
>         Attachments: MAPREDUCE-6241.001.patch, MAPREDUCE-6241.002.patch, MAPREDUCE-6241.003.patch
>
> Issue when using assembler code for performance optimization on the powerpc
> platform (compiled for 32 bit):
>
> mvn compile -Pnative -DskipTests
>
> [exec] /usr/bin/c++ -Dnativetask_EXPORTS -m32 -DSIMPLE_MEMCPY -fno-strict-aliasing -Wall -Wno-sign-compare -g -O2 -DNDEBUG -fPIC
>   -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native/javah
>   -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src
>   -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util
>   -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib
>   -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test
>   -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src
>   -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native
>   -I/home/hadoop/Java/java7/include -I/home/hadoop/Java/java7/include/linux
>   -isystem /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include
>   -o CMakeFiles/nativetask.dir/main/native/src/util/Checksum.cc.o
>   -c /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc
> [exec] CMakeFiles/nativetask.dir/build.make:744: recipe for target 'CMakeFiles/nativetask.dir/main/native/src/util/Checksum.cc.o' failed
> [exec] make[2]: Leaving directory '/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native'
> [exec] CMakeFiles/Makefile2:95: recipe for target 'CMakeFiles/nativetask.dir/all' failed
> [exec] make[1]: Leaving directory '/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native'
> [exec] Makefile:76: recipe for target 'all' failed
> [exec] /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc: In function ‘void NativeTask::init_cpu_support_flag()’:
> /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc:611:14: error: impossible register constraint in ‘asm’
> -->
>   "popl %%ebx" : "=a" (eax), [ebx] "=r"(ebx), "=c"(ecx), "=d"(edx) : "a" (eax_in) : "cc");
> <--
[jira] [Commented] (MAPREDUCE-6241) Native compilation fails for Checksum.cc due to an incompatibility of assembler register constraint for PowerPC
[ https://issues.apache.org/jira/browse/MAPREDUCE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14740031#comment-14740031 ]

Binglin Chang commented on MAPREDUCE-6241:
------------------------------------------

The code is basically copied from https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c which checks for __GNUC__.
[jira] [Commented] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739726#comment-14739726 ]

Hadoop QA commented on MAPREDUCE-5002:
--------------------------------------

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 18m 26s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 9m 2s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 11m 12s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 25s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 44s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 38s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 1m 16s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests | 9m 54s | Tests passed in hadoop-mapreduce-client-app. |
| | | 53m 21s | |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12755224/MAPREDUCE-5002.6.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fbbb7ff |
| hadoop-mapreduce-client-app test log | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5979/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt |
| Test Results | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5979/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5979/console |

This message was automatically generated.

> AM could potentially allocate a reduce container to a map attempt
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-5002
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5002
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mr-am
>    Affects Versions: 2.0.3-alpha, 0.23.7, 2.7.0
>            Reporter: Jason Lowe
>            Assignee: Chang Li
>         Attachments: MAPREDUCE-5002.1.patch, MAPREDUCE-5002.2.patch, MAPREDUCE-5002.2.patch, MAPREDUCE-5002.3.patch, MAPREDUCE-5002.4.patch, MAPREDUCE-5002.5.patch, MAPREDUCE-5002.6.patch
>
> As discussed in MAPREDUCE-4982, after MAPREDUCE-4893 it is theoretically
> possible for the AM to accidentally assign a reducer container to a map
> attempt if the AM doesn't find a reduce attempt actively looking for the
> container (e.g.: the RM accidentally allocated too many reducer containers).
[jira] [Commented] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739641#comment-14739641 ]

Hudson commented on MAPREDUCE-6474:
-----------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #354 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/354/])
MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file descriptors. Contributed by Kuhu Shukla (jlowe: rev 8e615588d5216394d0251a9c97bd706537856c6d)
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
* hadoop-mapreduce-project/CHANGES.txt

> ShuffleHandler can possibly exhaust nodemanager file descriptors
> ----------------------------------------------------------------
>
>                 Key: MAPREDUCE-6474
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2, nodemanager
>    Affects Versions: 2.5.0
>            Reporter: Nathan Roberts
>            Assignee: Kuhu Shukla
>             Fix For: 2.7.2
>
>         Attachments: YARN-2410-v1.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch, YARN-2410-v10.patch, YARN-2410-v11.patch
>
> The async nature of the ShuffleHandler can cause it to open a huge number of
> file descriptors; when it runs out, it crashes.
>
> Scenario: a job with 6K reduces, slow start set to 0.95, and about 40 map
> outputs per node. Let's say all 6K reduces hit a node at about the same time
> asking for their outputs. Each reducer will ask for all 40 map outputs over a
> single socket in a single request (not necessarily all 40 at once, but with
> coalescing it is likely to be a large number).
>
> sendMapOutput() will open the file for random reading and then perform an
> async transfer of the requested portion of the file. This will theoretically
> happen 6000*40 = 240,000 times, which will run the NM out of file descriptors
> and cause it to crash.
>
> The algorithm should be refactored a little to not open the fds until they're
> actually needed.
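The refactor direction in that last paragraph — don't acquire descriptors until they're needed — can be sketched as follows. This is an illustrative model with invented names (`DeferredOpenQueue`, `fdBudget`), not the actual ShuffleHandler change:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch, not ShuffleHandler itself: requested map outputs are
// queued as cheap path strings, and a file descriptor is acquired only when
// a transfer actually starts, so live descriptors never exceed fdBudget.
public class DeferredOpenQueue {
    private final Queue<String> pendingPaths = new ArrayDeque<>();
    private final int fdBudget;        // max simultaneously open descriptors
    private int liveDescriptors = 0;

    public DeferredOpenQueue(int fdBudget) { this.fdBudget = fdBudget; }

    /** Record a requested map output without consuming a descriptor. */
    public void request(String path) { pendingPaths.add(path); }

    /** Open (late) and start the next transfer, if the budget allows. */
    public boolean openNext() {
        if (liveDescriptors >= fdBudget || pendingPaths.isEmpty()) {
            return false;              // deferred: queued work holds no fd
        }
        pendingPaths.poll();           // the real code would open the file here
        liveDescriptors++;
        return true;
    }

    /** Called when an async transfer completes and its fd is closed. */
    public void finishTransfer() {
        if (liveDescriptors > 0) liveDescriptors--;
    }

    public int pendingCount() { return pendingPaths.size(); }
}
```

Under the eager scheme in the report, all 6000*40 = 240,000 requests could hold descriptors at once; with late opening, at most `fdBudget` are live and the rest wait in the queue.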
[jira] [Commented] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739624#comment-14739624 ]

Hudson commented on MAPREDUCE-6474:
-----------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2293 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2293/])
MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file descriptors. Contributed by Kuhu Shukla (jlowe: rev 8e615588d5216394d0251a9c97bd706537856c6d)
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
[jira] [Updated] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chang Li updated MAPREDUCE-5002:
--------------------------------

    Attachment: MAPREDUCE-5002.6.patch

Fixed the whitespace issue.
[jira] [Commented] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739564#comment-14739564 ]

Hadoop QA commented on MAPREDUCE-5002:
--------------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 16m 20s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 8m 0s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 12m 9s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 27s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 35s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 36s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 1m 22s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests | 10m 46s | Tests passed in hadoop-mapreduce-client-app. |
| | | 51m 58s | |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12755196/MAPREDUCE-5002.5.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7766610 |
| whitespace | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5978/artifact/patchprocess/whitespace.txt |
| hadoop-mapreduce-client-app test log | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5978/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt |
| Test Results | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5978/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5978/console |

This message was automatically generated.
[jira] [Commented] (MAPREDUCE-5870) Support for passing Job priority through Application Submission Context in Mapreduce Side
[ https://issues.apache.org/jira/browse/MAPREDUCE-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739504#comment-14739504 ]

Eric Payne commented on MAPREDUCE-5870:
---------------------------------------

{quote}
As discussed, if option 2 is fine, I will raise a separate ticket in YARN to handle the RM-AM update of priority (through the heartbeat). And I will separate the JobStatus priority update from this ticket for now.
{quote}

Thanks, [~sunilg]. If I understand correctly, you will update the patch for this JIRA so that the AM will not query the RM for its job priority, and then, in another JIRA, make the change to have the RM tell the AM its priority as part of a heartbeat ack. Is that correct? I just want to make sure that this JIRA doesn't add the extra load of an AM JobStatus query on the RM.

> Support for passing Job priority through Application Submission Context in
> Mapreduce Side
> --------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5870
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5870
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: client
>            Reporter: Sunil G
>            Assignee: Sunil G
>         Attachments: 0001-MAPREDUCE-5870.patch, 0002-MAPREDUCE-5870.patch, 0003-MAPREDUCE-5870.patch, 0004-MAPREDUCE-5870.patch, 0005-MAPREDUCE-5870.patch, 0006-MAPREDUCE-5870.patch, Yarn-2002.1.patch
>
> Job priority can be set from the client side as below [configuration and API]:
> a. JobConf.getJobPriority() and Job.setPriority(JobPriority priority)
> b. We can also use the configuration "mapreduce.job.priority".
> Now this job priority can be passed in the Application Submission Context
> from the client side. Here we can reuse the MRJobConfig.PRIORITY configuration.
[jira] [Commented] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739476#comment-14739476 ]

Hadoop QA commented on MAPREDUCE-5002:
--------------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 16m 22s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 7m 58s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 10m 30s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 22s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 33s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 1m 6s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests | 9m 20s | Tests passed in hadoop-mapreduce-client-app. |
| | | 48m 18s | |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12755195/MAPREDUCE-5002.4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7766610 |
| whitespace | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5977/artifact/patchprocess/whitespace.txt |
| hadoop-mapreduce-client-app test log | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5977/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt |
| Test Results | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5977/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5977/console |

This message was automatically generated.
[jira] [Commented] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739452#comment-14739452 ]

Hudson commented on MAPREDUCE-6474:
-----------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2316 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2316/])
MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file descriptors. Contributed by Kuhu Shukla (jlowe: rev 8e615588d5216394d0251a9c97bd706537856c6d)
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
* hadoop-mapreduce-project/CHANGES.txt
[jira] [Updated] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge
[ https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli updated MAPREDUCE-5649:
-----------------------------------------------

    Fix Version/s: 2.7.2

Just pulled this into branch-2.7 (release 2.7.2) as it already exists in 2.6.1. The branch-2 patch applies cleanly. Ran compilation and TestMergeManager before the push.

> Reduce cannot use more than 2G memory for the final merge
> ---------------------------------------------------------
>
>                 Key: MAPREDUCE-5649
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>            Reporter: stanley shi
>            Assignee: Gera Shegalov
>              Labels: 2.6.1-candidate, 2.7.2-candidate
>             Fix For: 2.6.1, 2.8.0, 2.7.2
>
>         Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, MAPREDUCE-5649.003.patch
>
> In org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.java, in the
> finalMerge method:
>
>   int maxInMemReduce = (int)Math.min(
>       Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
>
> This means that no matter how much memory the user has, the reducer will not
> retain more than 2G of data in memory before the reduce phase starts.
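The quoted line explains the 2G cap: `maxMemory()` returns a long byte count, but the result is forced through an `int`, so the `Math.min` against `Integer.MAX_VALUE` clamps everything at 2^31 - 1 bytes. A standalone sketch of the arithmetic (the heap sizes are made-up inputs, not values from a live JVM):

```java
// Sketch of the cap described in MAPREDUCE-5649: the same shape as the
// MergeManagerImpl line quoted in the issue, extracted so the clamping
// behavior can be seen with explicit inputs.
public class FinalMergeCap {
    static int maxInMemReduce(long maxMemory, double maxRedPer) {
        // long * double widens to double; the min against Integer.MAX_VALUE
        // then caps the int result at 2147483647 bytes (~2 GB).
        return (int) Math.min(maxMemory * maxRedPer, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        long heap64g = 64L * 1024 * 1024 * 1024;
        // Even a 64 GB heap with 90% retention is clamped to 2^31 - 1:
        System.out.println(maxInMemReduce(heap64g, 0.9)); // prints 2147483647
    }
}
```

With a 1 GB heap the cast is harmless (1 GB * 0.5 fits in an int); the clamp only bites once the requested retention exceeds 2 GB, which is exactly the behavior the issue reports.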
[jira] [Commented] (MAPREDUCE-6241) Native compilation fails for Checksum.cc due to an incompatibility of assembler register constraint for PowerPC
[ https://issues.apache.org/jira/browse/MAPREDUCE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739390#comment-14739390 ]

Colin Patrick McCabe commented on MAPREDUCE-6241:
-------------------------------------------------

Why is the patch checking for __GNUC__?
[jira] [Updated] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chang Li updated MAPREDUCE-5002: Attachment: MAPREDUCE-5002.5.patch The .5 patch improves some naming and comments. > AM could potentially allocate a reduce container to a map attempt > - > > Key: MAPREDUCE-5002 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5002 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mr-am >Affects Versions: 2.0.3-alpha, 0.23.7, 2.7.0 >Reporter: Jason Lowe >Assignee: Chang Li > Attachments: MAPREDUCE-5002.1.patch, MAPREDUCE-5002.2.patch, > MAPREDUCE-5002.2.patch, MAPREDUCE-5002.3.patch, MAPREDUCE-5002.4.patch, > MAPREDUCE-5002.5.patch > > > As discussed in MAPREDUCE-4982, after MAPREDUCE-4893 it is theoretically > possible for the AM to accidentally assign a reducer container to a map > attempt if the AM doesn't find a reduce attempt actively looking for the > container (e.g.: the RM accidentally allocated too many reducer containers). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739382#comment-14739382 ] Hudson commented on MAPREDUCE-6474: --- FAILURE: Integrated in Hadoop-Yarn-trunk #1106 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1106/]) MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file descriptors. Contributed by Kuhu Shukla (jlowe: rev 8e615588d5216394d0251a9c97bd706537856c6d) * hadoop-mapreduce-project/CHANGES.txt * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java > ShuffleHandler can possibly exhaust nodemanager file descriptors > > > Key: MAPREDUCE-6474 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2, nodemanager >Affects Versions: 2.5.0 >Reporter: Nathan Roberts >Assignee: Kuhu Shukla > Fix For: 2.7.2 > > Attachments: YARN-2410-v1.patch, YARN-2410-v10.patch, > YARN-2410-v11.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, > YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, > YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch > > > The async nature of the shufflehandler can cause it to open a huge number of > file descriptors, when it runs out it crashes. > Scenario: > Job with 6K reduces, slow start set to 0.95, about 40 map outputs per node. > Let's say all 6K reduces hit a node at about same time asking for their > outputs. Each reducer will ask for all 40 map outputs over a single socket in > a > single request (not necessarily all 40 at once, but with coalescing it is > likely to be a large number). > sendMapOutput() will open the file for random reading and then perform an > async transfer of the particular portion of this file(). 
This will > theoretically > happen 6000*40=240,000 times which will run the NM out of file descriptors and > cause it to crash. > The algorithm should be refactored a little to not open the fds until they're > actually needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
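The deferred-open refactoring suggested in the description above can be sketched in plain Java. This is an illustrative model of the idea (queue only the path, open the descriptor just before the bytes are sent, close as soon as the transfer finishes), not the actual ShuffleHandler patch; all class, field, and method names here are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of lazy FD acquisition: requests are queued without
// consuming a descriptor, and a descriptor is held only for the duration of
// one transfer. With eager opening, 6000 reducers * 40 outputs could demand
// 240,000 descriptors at once; lazily, the peak stays bounded by the number
// of in-flight transfers.
class LazyShuffleSketch {
    static int openFds = 0;                       // simulated descriptor count
    static final Queue<String> pending = new ArrayDeque<>();

    // Queue a map output for shipment; note that no FD is consumed here.
    static void queueOutput(String path) {
        pending.add(path);
    }

    // Open the file just-in-time, transfer it, and release the FD immediately.
    static void sendNext() {
        String path = pending.poll();
        if (path == null) {
            return;
        }
        openFds++;                                // open only when actually sending
        // ... transfer the requested portion of `path` here ...
        openFds--;                                // close as soon as the send is done
    }

    public static void main(String[] args) {
        for (int i = 0; i < 40; i++) {
            queueOutput("map_" + i + ".out");
        }
        while (!pending.isEmpty()) {
            sendNext();
        }
        System.out.println("open FDs after drain: " + openFds);
    }
}
```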
[jira] [Commented] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739369#comment-14739369 ] Chang Li commented on MAPREDUCE-5002: - Thanks [~jlowe] for review! Have reworked my unit test. Please help review the updated patch. Thanks! > AM could potentially allocate a reduce container to a map attempt > - > > Key: MAPREDUCE-5002 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5002 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mr-am >Affects Versions: 2.0.3-alpha, 0.23.7, 2.7.0 >Reporter: Jason Lowe >Assignee: Chang Li > Attachments: MAPREDUCE-5002.1.patch, MAPREDUCE-5002.2.patch, > MAPREDUCE-5002.2.patch, MAPREDUCE-5002.3.patch, MAPREDUCE-5002.4.patch > > > As discussed in MAPREDUCE-4982, after MAPREDUCE-4893 it is theoretically > possible for the AM to accidentally assign a reducer container to a map > attempt if the AM doesn't find a reduce attempt actively looking for the > container (e.g.: the RM accidentally allocated too many reducer containers). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chang Li updated MAPREDUCE-5002: Attachment: MAPREDUCE-5002.4.patch There was a debugging print in the .3 patch that I forgot to delete. The .4 patch fixes that. > AM could potentially allocate a reduce container to a map attempt > - > > Key: MAPREDUCE-5002 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5002 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mr-am >Affects Versions: 2.0.3-alpha, 0.23.7, 2.7.0 >Reporter: Jason Lowe >Assignee: Chang Li > Attachments: MAPREDUCE-5002.1.patch, MAPREDUCE-5002.2.patch, > MAPREDUCE-5002.2.patch, MAPREDUCE-5002.3.patch, MAPREDUCE-5002.4.patch > > > As discussed in MAPREDUCE-4982, after MAPREDUCE-4893 it is theoretically > possible for the AM to accidentally assign a reducer container to a map > attempt if the AM doesn't find a reduce attempt actively looking for the > container (e.g.: the RM accidentally allocated too many reducer containers). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (MAPREDUCE-5870) Support for passing Job priority through Application Submission Context in Mapreduce Side
[ https://issues.apache.org/jira/browse/MAPREDUCE-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739298#comment-14739298 ] Sunil G commented on MAPREDUCE-5870: Hi [~jlowe] [~eepayne] [~jianhe] As discussed, if option 2 is fine, I will raise a separate ticket in YARN to handle the RM-AM priority update (through the heartbeat), and I will separate the JobStatus priority update from this ticket for now. Once the YARN work is done, we can handle the JobStatus priority update in the correct way. Could you share your opinion on this? > Support for passing Job priority through Application Submission Context in > Mapreduce Side > - > > Key: MAPREDUCE-5870 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5870 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: client >Reporter: Sunil G >Assignee: Sunil G > Attachments: 0001-MAPREDUCE-5870.patch, 0002-MAPREDUCE-5870.patch, > 0003-MAPREDUCE-5870.patch, 0004-MAPREDUCE-5870.patch, > 0005-MAPREDUCE-5870.patch, 0006-MAPREDUCE-5870.patch, Yarn-2002.1.patch > > > Job priority can be set from the client side as below [Configuration and API]. > a. JobConf.getJobPriority() and > Job.setPriority(JobPriority priority) > b. We can also use configuration > "mapreduce.job.priority". > Now this Job priority can be passed in Application Submission > context from Client side. > Here we can reuse the MRJobConfig.PRIORITY configuration. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
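The two client-side mechanisms named in the issue description (the `Job.setPriority(JobPriority)` API call and the `"mapreduce.job.priority"` configuration key) imply a precedence rule: an explicit API value should win over the config key, which in turn falls back to a default. A minimal standalone sketch of that resolution order, using `java.util.Properties` to stand in for a Hadoop `Configuration` (only the config key is taken from the issue; the class and method names below are illustrative, not Hadoop's real API):

```java
import java.util.Properties;

// Illustrative priority levels; Hadoop defines its own JobPriority enum.
enum JobPriority { VERY_LOW, LOW, NORMAL, HIGH, VERY_HIGH }

class PrioritySketch {
    // Config key quoted from the issue description.
    static final String PRIORITY_KEY = "mapreduce.job.priority";

    // Resolve the effective priority: explicit API value wins, then the
    // config key, then a NORMAL default.
    static JobPriority resolve(JobPriority apiValue, Properties conf) {
        if (apiValue != null) {
            return apiValue;
        }
        String fromConf = conf.getProperty(PRIORITY_KEY);
        return fromConf == null ? JobPriority.NORMAL : JobPriority.valueOf(fromConf);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(PRIORITY_KEY, "HIGH");
        System.out.println(resolve(null, conf));             // taken from config
        System.out.println(resolve(JobPriority.LOW, conf));  // API overrides config
        System.out.println(resolve(null, new Properties())); // falls back to default
    }
}
```

The resolved value is what would then be copied into the ApplicationSubmissionContext on submission, per the approach the issue proposes.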
[jira] [Updated] (MAPREDUCE-5002) AM could potentially allocate a reduce container to a map attempt
[ https://issues.apache.org/jira/browse/MAPREDUCE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chang Li updated MAPREDUCE-5002: Attachment: MAPREDUCE-5002.3.patch > AM could potentially allocate a reduce container to a map attempt > - > > Key: MAPREDUCE-5002 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5002 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mr-am >Affects Versions: 2.0.3-alpha, 0.23.7, 2.7.0 >Reporter: Jason Lowe >Assignee: Chang Li > Attachments: MAPREDUCE-5002.1.patch, MAPREDUCE-5002.2.patch, > MAPREDUCE-5002.2.patch, MAPREDUCE-5002.3.patch > > > As discussed in MAPREDUCE-4982, after MAPREDUCE-4893 it is theoretically > possible for the AM to accidentally assign a reducer container to a map > attempt if the AM doesn't find a reduce attempt actively looking for the > container (e.g.: the RM accidentally allocated too many reducer containers). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739308#comment-14739308 ] Hudson commented on MAPREDUCE-6474: --- FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #374 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/374/]) MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file descriptors. Contributed by Kuhu Shukla (jlowe: rev 8e615588d5216394d0251a9c97bd706537856c6d) * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java * hadoop-mapreduce-project/CHANGES.txt > ShuffleHandler can possibly exhaust nodemanager file descriptors > > > Key: MAPREDUCE-6474 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2, nodemanager >Affects Versions: 2.5.0 >Reporter: Nathan Roberts >Assignee: Kuhu Shukla > Fix For: 2.7.2 > > Attachments: YARN-2410-v1.patch, YARN-2410-v10.patch, > YARN-2410-v11.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, > YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, > YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch > > > The async nature of the shufflehandler can cause it to open a huge number of > file descriptors, when it runs out it crashes. > Scenario: > Job with 6K reduces, slow start set to 0.95, about 40 map outputs per node. > Let's say all 6K reduces hit a node at about same time asking for their > outputs. Each reducer will ask for all 40 map outputs over a single socket in > a > single request (not necessarily all 40 at once, but with coalescing it is > likely to be a large number). 
> sendMapOutput() will open the file for random reading and then perform an > async transfer of the particular portion of this file(). This will > theoretically > happen 6000*40=240,000 times which will run the NM out of file descriptors and > cause it to crash. > The algorithm should be refactored a little to not open the fds until they're > actually needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739289#comment-14739289 ] Hudson commented on MAPREDUCE-6474: --- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #368 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/368/]) MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file descriptors. Contributed by Kuhu Shukla (jlowe: rev 8e615588d5216394d0251a9c97bd706537856c6d) * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java * hadoop-mapreduce-project/CHANGES.txt > ShuffleHandler can possibly exhaust nodemanager file descriptors > > > Key: MAPREDUCE-6474 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2, nodemanager >Affects Versions: 2.5.0 >Reporter: Nathan Roberts >Assignee: Kuhu Shukla > Fix For: 2.7.2 > > Attachments: YARN-2410-v1.patch, YARN-2410-v10.patch, > YARN-2410-v11.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, > YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, > YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch > > > The async nature of the shufflehandler can cause it to open a huge number of > file descriptors, when it runs out it crashes. > Scenario: > Job with 6K reduces, slow start set to 0.95, about 40 map outputs per node. > Let's say all 6K reduces hit a node at about same time asking for their > outputs. Each reducer will ask for all 40 map outputs over a single socket in > a > single request (not necessarily all 40 at once, but with coalescing it is > likely to be a large number). 
> sendMapOutput() will open the file for random reading and then perform an > async transfer of the particular portion of this file(). This will > theoretically > happen 6000*40=240,000 times which will run the NM out of file descriptors and > cause it to crash. > The algorithm should be refactored a little to not open the fds until they're > actually needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739033#comment-14739033 ] Hudson commented on MAPREDUCE-6474: --- FAILURE: Integrated in Hadoop-trunk-Commit #8429 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8429/]) MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file descriptors. Contributed by Kuhu Shukla (jlowe: rev 8e615588d5216394d0251a9c97bd706537856c6d) * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java * hadoop-mapreduce-project/CHANGES.txt * hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java > ShuffleHandler can possibly exhaust nodemanager file descriptors > > > Key: MAPREDUCE-6474 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2, nodemanager >Affects Versions: 2.5.0 >Reporter: Nathan Roberts >Assignee: Kuhu Shukla > Fix For: 2.7.2 > > Attachments: YARN-2410-v1.patch, YARN-2410-v10.patch, > YARN-2410-v11.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, > YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, > YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch > > > The async nature of the shufflehandler can cause it to open a huge number of > file descriptors, when it runs out it crashes. > Scenario: > Job with 6K reduces, slow start set to 0.95, about 40 map outputs per node. > Let's say all 6K reduces hit a node at about same time asking for their > outputs. Each reducer will ask for all 40 map outputs over a single socket in > a > single request (not necessarily all 40 at once, but with coalescing it is > likely to be a large number). 
> sendMapOutput() will open the file for random reading and then perform an > async transfer of the particular portion of this file(). This will > theoretically > happen 6000*40=240,000 times which will run the NM out of file descriptors and > cause it to crash. > The algorithm should be refactored a little to not open the fds until they're > actually needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated MAPREDUCE-6474: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.7.2 Status: Resolved (was: Patch Available) Thanks to Kuhu for the contribution and to Nathan for additional review! I committed this to trunk, branch-2, and branch-2.7. > ShuffleHandler can possibly exhaust nodemanager file descriptors > > > Key: MAPREDUCE-6474 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2, nodemanager >Affects Versions: 2.5.0 >Reporter: Nathan Roberts >Assignee: Kuhu Shukla > Fix For: 2.7.2 > > Attachments: YARN-2410-v1.patch, YARN-2410-v10.patch, > YARN-2410-v11.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, > YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, > YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch > > > The async nature of the shufflehandler can cause it to open a huge number of > file descriptors, when it runs out it crashes. > Scenario: > Job with 6K reduces, slow start set to 0.95, about 40 map outputs per node. > Let's say all 6K reduces hit a node at about same time asking for their > outputs. Each reducer will ask for all 40 map outputs over a single socket in > a > single request (not necessarily all 40 at once, but with coalescing it is > likely to be a large number). > sendMapOutput() will open the file for random reading and then perform an > async transfer of the particular portion of this file(). This will > theoretically > happen 6000*40=240,000 times which will run the NM out of file descriptors and > cause it to crash. > The algorithm should be refactored a little to not open the fds until they're > actually needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (MAPREDUCE-6474) ShuffleHandler can possibly exhaust nodemanager file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated MAPREDUCE-6474: -- Summary: ShuffleHandler can possibly exhaust nodemanager file descriptors (was: Nodemanager ShuffleHandler can possible exhaust file descriptors) > ShuffleHandler can possibly exhaust nodemanager file descriptors > > > Key: MAPREDUCE-6474 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2, nodemanager >Affects Versions: 2.5.0 >Reporter: Nathan Roberts >Assignee: Kuhu Shukla > Attachments: YARN-2410-v1.patch, YARN-2410-v10.patch, > YARN-2410-v11.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, > YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, > YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch > > > The async nature of the shufflehandler can cause it to open a huge number of > file descriptors, when it runs out it crashes. > Scenario: > Job with 6K reduces, slow start set to 0.95, about 40 map outputs per node. > Let's say all 6K reduces hit a node at about same time asking for their > outputs. Each reducer will ask for all 40 map outputs over a single socket in > a > single request (not necessarily all 40 at once, but with coalescing it is > likely to be a large number). > sendMapOutput() will open the file for random reading and then perform an > async transfer of the particular portion of this file(). This will > theoretically > happen 6000*40=240,000 times which will run the NM out of file descriptors and > cause it to crash. > The algorithm should be refactored a little to not open the fds until they're > actually needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (MAPREDUCE-6474) Nodemanager ShuffleHandler can possible exhaust file descriptors
[ https://issues.apache.org/jira/browse/MAPREDUCE-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe moved YARN-2410 to MAPREDUCE-6474: - Affects Version/s: (was: 2.5.0) 2.5.0 Target Version/s: 2.7.2 (was: 2.7.2) Component/s: (was: nodemanager) nodemanager mrv2 Key: MAPREDUCE-6474 (was: YARN-2410) Project: Hadoop Map/Reduce (was: Hadoop YARN) > Nodemanager ShuffleHandler can possible exhaust file descriptors > > > Key: MAPREDUCE-6474 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6474 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2, nodemanager >Affects Versions: 2.5.0 >Reporter: Nathan Roberts >Assignee: Kuhu Shukla > Attachments: YARN-2410-v1.patch, YARN-2410-v10.patch, > YARN-2410-v11.patch, YARN-2410-v2.patch, YARN-2410-v3.patch, > YARN-2410-v4.patch, YARN-2410-v5.patch, YARN-2410-v6.patch, > YARN-2410-v7.patch, YARN-2410-v8.patch, YARN-2410-v9.patch > > > The async nature of the shufflehandler can cause it to open a huge number of > file descriptors, when it runs out it crashes. > Scenario: > Job with 6K reduces, slow start set to 0.95, about 40 map outputs per node. > Let's say all 6K reduces hit a node at about same time asking for their > outputs. Each reducer will ask for all 40 map outputs over a single socket in > a > single request (not necessarily all 40 at once, but with coalescing it is > likely to be a large number). > sendMapOutput() will open the file for random reading and then perform an > async transfer of the particular portion of this file(). This will > theoretically > happen 6000*40=240,000 times which will run the NM out of file descriptors and > cause it to crash. > The algorithm should be refactored a little to not open the fds until they're > actually needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (MAPREDUCE-6241) Native compilation fails for Checksum.cc due to an incompatibility of assembler register constraint for PowerPC
[ https://issues.apache.org/jira/browse/MAPREDUCE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayappan updated MAPREDUCE-6241: --- Priority: Major (was: Minor) > Native compilation fails for Checksum.cc due to an incompatibility of > assembler register constraint for PowerPC > > > Key: MAPREDUCE-6241 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6241 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: build >Affects Versions: 3.0.0, 2.6.0 > Environment: Debian/Jessie, kernel 3.18.5, ppc64 GNU/Linux > gcc (Debian 4.9.1-19) > protobuf 2.6.1 > OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-2) > OpenJDK Zero VM (build 24.65-b04, interpreted mode) > source was cloned (and updated) from Apache-Hadoop's git repository >Reporter: Stephan Drescher >Assignee: Binglin Chang > Labels: BB2015-05-TBR, features > Attachments: MAPREDUCE-6241.001.patch, MAPREDUCE-6241.002.patch, > MAPREDUCE-6241.003.patch > > > Issue when using assembler code for performance optimization on the powerpc > platform (compiled for 32bit) > mvn compile -Pnative -DskipTests > [exec] /usr/bin/c++ -Dnativetask_EXPORTS -m32 -DSIMPLE_MEMCPY > -fno-strict-aliasing -Wall -Wno-sign-compare -g -O2 -DNDEBUG -fPIC > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native/javah > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test > > 
-I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native > -I/home/hadoop/Java/java7/include -I/home/hadoop/Java/java7/include/linux > -isystem > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include > -o CMakeFiles/nativetask.dir/main/native/src/util/Checksum.cc.o -c > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc > [exec] CMakeFiles/nativetask.dir/build.make:744: recipe for target > 'CMakeFiles/nativetask.dir/main/native/src/util/Checksum.cc.o' failed > [exec] make[2]: Leaving directory > '/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native' > [exec] CMakeFiles/Makefile2:95: recipe for target > 'CMakeFiles/nativetask.dir/all' failed > [exec] make[1]: Leaving directory > '/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native' > [exec] Makefile:76: recipe for target 'all' failed > [exec] > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc: > In function ‘void NativeTask::init_cpu_support_flag()’: > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc:611:14: > error: impossible register constraint in ‘asm’ > --> > "popl %%ebx" : "=a" (eax), [ebx] "=r"(ebx), "=c"(ecx), "=d"(edx) : "a" > (eax_in) : "cc"); > <-- -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (MAPREDUCE-6468) Consistent log severity level guards and statements in MapReduce project
[ https://issues.apache.org/jira/browse/MAPREDUCE-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738378#comment-14738378 ] Jagadesh Kiran N commented on MAPREDUCE-6468: - The test case failures are not related to the patch. [~ozawa], please review. > Consistent log severity level guards and statements in MapReduce project > > > Key: MAPREDUCE-6468 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6468 > Project: Hadoop Map/Reduce > Issue Type: Improvement >Reporter: Jackie Chang >Assignee: Jagadesh Kiran N >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HADOOP-9995-00.patch, HADOOP-9995.patch, > MAPREDUCE-6468-01.patch, MAPREDUCE-6468-02.patch, MAPREDUCE-6468-03.patch, > MAPREDUCE-6468-04.patch > > > Developers use logs to do in-house debugging. These log statements are later > demoted to less severe levels and usually are guarded by their matching > severity levels. However, we do see inconsistencies in trunk. A log statement > like > {code} >if (LOG.isDebugEnabled()) { > LOG.info("Assigned container (" + allocated + ") " > {code} > doesn't make much sense because the log message is actually only printed out > in DEBUG-level. We do see previous issues tried to correct this > inconsistency. I am proposing a comprehensive correction over trunk. > Doug Cutting pointed it out in HADOOP-312: > https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498 > HDFS-1611 also corrected this inconsistency. > This could have been avoided by switching from log4j to slf4j's {} format > like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner > code and slightly higher performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
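The mismatched-guard pattern quoted in the issue description, and the two consistent alternatives it contrasts, can be sketched as follows. To keep the example self-contained this uses `java.util.logging` from the JDK (where FINE plays the role of DEBUG and `{0}` plays the role of slf4j's `{}`); Hadoop itself uses commons-logging/log4j, and the class here is illustrative.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class LogGuardSketch {
    static final Logger LOG = Logger.getLogger(LogGuardSketch.class.getName());

    static void demo(Object allocated) {
        // The bug pattern: a DEBUG-level guard around an INFO-level statement.
        // The message claims INFO severity yet only appears when debug is on.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.info("Assigned container (" + allocated + ")");
        }

        // Consistent: the guard level matches the statement level, so the
        // string concatenation is skipped whenever FINE is disabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Assigned container (" + allocated + ")");
        }

        // Parameterized logging (slf4j's {} style; {0} in java.util.logging)
        // needs no guard at all: the message is formatted only if FINE is
        // enabled, which is the cleaner, slightly faster form the issue cites.
        LOG.log(Level.FINE, "Assigned container ({0})", allocated);
    }

    public static void main(String[] args) {
        demo("container_e01_0001");
    }
}
```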
[jira] [Commented] (MAPREDUCE-6241) Native compilation fails for Checksum.cc due to an incompatibility of assembler register constraint for PowerPC
[ https://issues.apache.org/jira/browse/MAPREDUCE-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738327#comment-14738327 ] Ayappan commented on MAPREDUCE-6241: Any update here? This issue has been lingering for a long time. > Native compilation fails for Checksum.cc due to an incompatibility of > assembler register constraint for PowerPC > > > Key: MAPREDUCE-6241 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6241 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: build >Affects Versions: 3.0.0, 2.6.0 > Environment: Debian/Jessie, kernel 3.18.5, ppc64 GNU/Linux > gcc (Debian 4.9.1-19) > protobuf 2.6.1 > OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-2) > OpenJDK Zero VM (build 24.65-b04, interpreted mode) > source was cloned (and updated) from Apache-Hadoop's git repository >Reporter: Stephan Drescher >Assignee: Binglin Chang >Priority: Minor > Labels: BB2015-05-TBR, features > Attachments: MAPREDUCE-6241.001.patch, MAPREDUCE-6241.002.patch, > MAPREDUCE-6241.003.patch > > > Issue when using assembler code for performance optimization on the powerpc > platform (compiled for 32bit) > mvn compile -Pnative -DskipTests > [exec] /usr/bin/c++ -Dnativetask_EXPORTS -m32 -DSIMPLE_MEMCPY > -fno-strict-aliasing -Wall -Wno-sign-compare -g -O2 -DNDEBUG -fPIC > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native/javah > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib > > 
-I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src > > -I/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native > -I/home/hadoop/Java/java7/include -I/home/hadoop/Java/java7/include/linux > -isystem > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include > -o CMakeFiles/nativetask.dir/main/native/src/util/Checksum.cc.o -c > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc > [exec] CMakeFiles/nativetask.dir/build.make:744: recipe for target > 'CMakeFiles/nativetask.dir/main/native/src/util/Checksum.cc.o' failed > [exec] make[2]: Leaving directory > '/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native' > [exec] CMakeFiles/Makefile2:95: recipe for target > 'CMakeFiles/nativetask.dir/all' failed > [exec] make[1]: Leaving directory > '/home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/target/native' > [exec] Makefile:76: recipe for target 'all' failed > [exec] > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc: > In function ‘void NativeTask::init_cpu_support_flag()’: > /home/hadoop/Developer/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc:611:14: > error: impossible register constraint in ‘asm’ > --> > "popl %%ebx" : "=a" (eax), [ebx] "=r"(ebx), "=c"(ecx), "=d"(edx) : "a" > (eax_in) : 
"cc"); > <-- -- This message was sent by Atlassian JIRA (v6.3.4#6332)