[ https://issues.apache.org/jira/browse/HADOOP-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340801#comment-14340801 ]

Chris Nauroth commented on HADOOP-10027:
----------------------------------------

[~huizane], thank you for the patch.  This looks like the right approach.  I 
did a closer review of HADOOP-3604, and the linked Java bugs are quite old.  I 
don't believe any realistic deployment would still be running on such old Java 
versions.

This same code pattern is in all of the native compression codecs, likely due 
to copy-paste.  To make this patch comprehensive, let's update all of them: 
bzip2, lz4, snappy and zlib.
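
For illustration, here is a rough sketch of what the corrected lookup could 
look like in {{ZlibCompressor.c}} (field and macro names follow the existing 
source; the actual patch may take a different shape, e.g. locking on the 
instance as Eric suggested in the description):

{code}
/* Sketch only, not the actual patch.  The point is that GetStaticObjectField
 * must receive a jclass, so derive one from the instance instead of passing
 * the instance itself. */
jclass clazz = (*env)->GetObjectClass(env, this);

/* The static 'clazz' field is only used as a lock object. */
jobject lock = (*env)->GetStaticObjectField(env, clazz, ZlibCompressor_clazz);

LOCK_CLASS(env, lock, "ZlibCompressor");
jobject uncompressed_direct_buf = (*env)->GetObjectField(env, this,
    ZlibCompressor_uncompressedDirectBuf);
UNLOCK_CLASS(env, lock, "ZlibCompressor");
{code}

Whichever form we settle on, the same change applies mechanically to the other 
three codecs.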

The new test is a good idea, but I think it needs some changes.  As written, 
it starts 10 threads, but JUnit will finish executing 
{{testZlibCompressDecompressInMultiThreads}} before those threads have really 
completed.  In addition, if an exception is thrown from within a background 
thread, there is no reporting back to the main JUnit thread.  Because of 
those two things, unexpected failures on the background threads wouldn't 
actually show up as JUnit failures.  To fix this, I think you'll need to 
capture the {{Thread}} instances in an array, {{join}} all of them at the end 
of the test, and also work out a way to propagate possible exceptions out of 
those threads.  There is a helper class at 
{{org.apache.hadoop.test.MultithreadedTestUtil}} that might help you 
implement this.
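
Something along these lines would cover both problems (just a sketch: the 
thread count and worker body are placeholders for whatever the real round 
trip looks like, and {{MultithreadedTestUtil}} could replace the manual 
bookkeeping):

{code:java}
// Sketch only; the worker body is a placeholder for the real
// compress/decompress round trip.
@Test
public void testZlibCompressDecompressInMultiThreads() throws Exception {
  final int threadCount = 10;
  final AtomicReference<Throwable> failure = new AtomicReference<Throwable>();
  Thread[] threads = new Thread[threadCount];

  for (int i = 0; i < threadCount; i++) {
    threads[i] = new Thread(new Runnable() {
      @Override
      public void run() {
        try {
          // per-thread ZlibCompressor/ZlibDecompressor round trip goes here
        } catch (Throwable t) {
          failure.compareAndSet(null, t);  // remember the first failure
        }
      }
    });
    threads[i].start();
  }

  // Join every worker so the test cannot return while threads are running.
  for (Thread t : threads) {
    t.join();
  }

  // Re-throw on the JUnit thread so background failures fail the test.
  if (failure.get() != null) {
    throw new AssertionError(failure.get());
  }
}
{code}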

> *Compressor_deflateBytesDirect passes instance instead of jclass to 
> GetStaticObjectField
> ----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-10027
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10027
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: native
>            Reporter: Eric Abbott
>            Assignee: Hui Zheng
>            Priority: Minor
>         Attachments: HADOOP-10027.1.patch, HADOOP-10027.2.patch
>
>
> http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c?view=markup
> This pattern appears in all the native compressors.
>     // Get members of ZlibCompressor
>     jobject clazz = (*env)->GetStaticObjectField(env, this,
>                                                  ZlibCompressor_clazz);
> The 2nd argument to GetStaticObjectField is supposed to be a jclass, not a 
> jobject. Adding the JVM param -Xcheck:jni will cause "FATAL ERROR in native 
> method: JNI received a class argument that is not a class" and a core dump 
> such as the following.
> (gdb) 
> #0 0x00007f02e4aef8a5 in raise () from /lib64/libc.so.6
> #1 0x00007f02e4af1085 in abort () from /lib64/libc.so.6
> #2 0x00007f02e45bd727 in os::abort(bool) () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #3 0x00007f02e43cec63 in jniCheck::validate_class(JavaThread*, _jclass*, 
> bool) () from /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #4 0x00007f02e43ea669 in checked_jni_GetStaticObjectField () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #5 0x00007f02d38eaf79 in 
> Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_deflateBytesDirect () 
> from /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> In addition, that clazz object is only used for synchronization. In the case 
> of the native method _deflateBytesDirect, the result is a class-wide lock 
> used to access the instance field uncompressed_direct_buf. Perhaps using the 
> instance as the sync point is more appropriate?



