[ https://issues.apache.org/jira/browse/HADOOP-3604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-3604:
----------------------------------

    Attachment: HADOOP-3604_1_20080624.patch

So it looks like a bug in the JVM that causes a lock-up when two or more threads call 
JNIEnv::GetDirectBufferAddress to get the direct-buffer address in the JNI 
layer, especially while the first call is still initializing the direct-buffer 
support...
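
For context, the native glue follows roughly this pattern (a sketch with made-up names, not the actual Hadoop source): every invocation asks the JVM for the buffer's address, and the first such call anywhere in the process also triggers the JVM's lazy direct-buffer initialization, which is where concurrent callers wedge, as the traces below show.

{noformat}
#include <jni.h>

/* Illustrative sketch of the racy pattern (class/method names invented).
 * Each call does an unsynchronized GetDirectBufferAddress; the first one
 * process-wide drives the JVM's lazy initializeDirectBufferSupport. */
JNIEXPORT jint JNICALL
Java_example_Decompressor_decompressBytesDirect(JNIEnv *env, jobject obj,
                                                jobject compressed_buf)
{
    /* Two threads can land here at once while the JVM is still inside
     * its direct-buffer initialization. */
    void *src = (*env)->GetDirectBufferAddress(env, compressed_buf);
    if (src == NULL) {
        return -1;  /* not a direct buffer, or the JNI lookup failed */
    }
    /* ... decompress from src ... */
    return 0;
}
{noformat}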

{noformat}
----------------- 22126 -----------------
22190 0xffffe410 ????????
0x00000002 ????????
0x064e5344 _ZN5Mutex28lock_without_safepoint_checkEv + 0x44
0x06557f57 _ZN20SafepointSynchronize5blockEP10JavaThread + 0xd7
0x0639bfd5 lookupDirectBufferClasses + 0x245
0x0639d5d2 _Z29initializeDirectBufferSupportP7JNIEnv_P10JavaThread + 0x192
0x0639c149 jni_GetDirectBufferAddress + 0x79
0xcf9fdf90 Java_org_apache_hadoop_io_compress_lzo_LzoDecompressor_decompressBytesDirect + 0xd0
0xf4d863aa * org.apache.hadoop.io.compress.lzo.LzoDecompressor.decompressBytesDirect(int) bci:0 (Interpreted frame)
0xf4d7ede9 * org.apache.hadoop.io.compress.lzo.LzoDecompressor.decompress(byte[], int, int) bci:116 line:273 (Interpreted frame)
0xf4d7f2c5 * org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(byte[], int, int) bci:49 line:75 (Interpreted frame)
0xf4d7ede9 * org.apache.hadoop.io.compress.DecompressorStream.read(byte[], int, int) bci:39 line:74 (Interpreted frame)
0xf4d7ede9 * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputLocation, org.apache.hadoop.fs.Path) bci:549 line:1141 (Interpreted frame)
0xf4d7eda7 * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputLocation) bci:181 line:976 (Interpreted frame)
0xf4d7ee2b * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run() bci:93 line:925 (Interpreted frame)

----------------- 22176 -----------------
0xffffe410  ????????
0x00000ffc  ????????
0x0639c149  jni_GetDirectBufferAddress + 0x79
0xcf9fdf90  Java_org_apache_hadoop_io_compress_lzo_LzoDecompressor_decompressBytesDirect + 0xd0
0xf4d863aa  * org.apache.hadoop.io.compress.lzo.LzoDecompressor.decompressBytesDirect(int) bci:0 (Interpreted frame)
0xf4d7ede9  * org.apache.hadoop.io.compress.lzo.LzoDecompressor.decompress(byte[], int, int) bci:116 line:273 (Interpreted frame)
0xf4d7f2c5  * org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(byte[], int, int) bci:49 line:75 (Interpreted frame)
0xf4d7ede9  * org.apache.hadoop.io.compress.DecompressorStream.read(byte[], int, int) bci:39 line:74 (Interpreted frame)
0xf4d7ede9  * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputLocation, org.apache.hadoop.fs.Path) bci:549 line:1141 (Interpreted frame)
0xf4d7eda7  * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputLocation) bci:181 line:976 (Interpreted frame)
0xf4d7ee2b  * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run() bci:93 line:925 (Interpreted frame)

----------------- 22172 -----------------
0xffffe410  ????????
0x00000ffc  ????????
0x0639c149  jni_GetDirectBufferAddress + 0x79
0xcf9fdf90  Java_org_apache_hadoop_io_compress_lzo_LzoDecompressor_decompressBytesDirect + 0xd0
0xf4d863aa  * org.apache.hadoop.io.compress.lzo.LzoDecompressor.decompressBytesDirect(int) bci:0 (Interpreted frame)
0xf4d7ede9  * org.apache.hadoop.io.compress.lzo.LzoDecompressor.decompress(byte[], int, int) bci:116 line:273 (Interpreted frame)
0xf4d7f2c5  * org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(byte[], int, int) bci:49 line:75 (Interpreted frame)
0xf4d7ede9  * org.apache.hadoop.io.compress.DecompressorStream.read(byte[], int, int) bci:39 line:74 (Interpreted frame)
0xf4d7ede9  * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputLocation, org.apache.hadoop.fs.Path) bci:549 line:1141 (Interpreted frame)
0xf4d7eda7  * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputLocation) bci:181 line:976 (Interpreted frame)
0xf4d7ee2b  * org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run() bci:93 line:925 (Interpreted frame)

{noformat}

We synchronized access to JNIEnv::GetDirectBufferAddress... voila!

Here is a patch that does fine-grained locking of just the call to 
JNIEnv::GetDirectBufferAddress. It has minimal performance impact, and with 
compression on for intermediate map-outputs we see a substantial improvement 
in GridMix and no more lockups, whee! *smile*
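
For illustration, the shape of the fix is a process-wide lock held only for the duration of the address lookup. This is a minimal sketch, with an assumed mutex name and helper function, not the patch itself:

{noformat}
#include <jni.h>
#include <pthread.h>

/* One process-wide lock so only a single thread can be inside the JVM's
 * lazy direct-buffer initialization at a time. */
static pthread_mutex_t direct_buffer_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical helper: fine-grained locking of just this one JNI call;
 * the decompression itself still runs fully in parallel. */
static void *locked_direct_buffer_address(JNIEnv *env, jobject buffer)
{
    void *addr;
    pthread_mutex_lock(&direct_buffer_lock);
    addr = (*env)->GetDirectBufferAddress(env, buffer);
    pthread_mutex_unlock(&direct_buffer_lock);
    return addr;
}
{noformat}

Since the JVM's direct-buffer initialization happens only once, the lock should be essentially uncontended after the first few calls, which is presumably why the performance impact is minimal.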

I'm hoping for a response to the post on the Java forum...

> Reduce stuck at shuffling phase
> -------------------------------
>
>                 Key: HADOOP-3604
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3604
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.18.0
>            Reporter: Runping Qi
>            Assignee: Arun C Murthy
>            Priority: Blocker
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3604_0_20080623.patch, 
> HADOOP-3604_1_20080624.patch, HADOOP-3604_1_20080624.patch, stack.txt
>
>
> I was running gridmix with Hadoop 0.18.
> I set the map output compression to true.
> Most of the jobs completed just fine.
> Three jobs, however, got stuck.
> Each had one reducer stuck at the shuffling phase.
> Here is the log:
> 2008-06-20 00:06:01,264 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=SHUFFLE, sessionId=
> 2008-06-20 00:06:01,415 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed exec [/bin/cat]
> 2008-06-20 00:06:01,463 INFO org.apache.hadoop.mapred.ReduceTask: ShuffleRamManager: MemoryLimit=134217728, MaxSingleShuffleLimit=33554432
> 2008-06-20 00:06:01,474 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
> 2008-06-20 00:06:01,475 INFO org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
> 2008-06-20 00:06:01,476 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,477 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,477 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,478 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,478 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,486 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,486 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,487 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,487 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,488 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,488 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,489 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,489 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,489 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,493 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,496 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,496 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,496 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,497 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,497 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> 2008-06-20 00:06:01,500 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0 Thread started: Thread for merging on-disk files
> 2008-06-20 00:06:01,500 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0 Thread waiting: Thread for merging on-disk files
> 2008-06-20 00:06:01,502 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0 Need another 270 map output(s) where 0 is already in progress
> 2008-06-20 00:06:01,503 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0 Thread started: Thread for merging in memory files
> 2008-06-20 00:06:01,503 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0: Got 0 new map-outputs & number of known map outputs is 0
> 2008-06-20 00:06:01,504 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0 Scheduled 0 of 0 known outputs (0 slow hosts and 0 dup hosts)
> 2008-06-20 00:06:06,654 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0: Got 269 new map-outputs & number of known map outputs is 269
> 2008-06-20 00:06:06,656 INFO org.apache.hadoop.mapred.ReduceTask: attempt_200806192318_0450_r_000016_0 Scheduled 229 of 269 known outputs (0 slow hosts and 40 dup hosts)
> 2008-06-20 00:06:07,163 INFO org.apache.hadoop.mapred.ReduceTask: Shuffling 846183 bytes (210104 raw bytes) into RAM-FS from attempt_200806192318_0450_m_000089_0
> 2008-06-20 00:06:07,163 INFO org.apache.hadoop.mapred.ReduceTask: Shuffling 820890 bytes (204371 raw bytes) into RAM-FS from attempt_200806192318_0450_m_000083_0
> 2008-06-20 00:06:07,166 INFO org.apache.hadoop.mapred.ReduceTask: Shuffling 835672 bytes (208085 raw bytes) into RAM-FS from attempt_200806192318_0450_m_000122_0

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
