Dear HBase users and devs,

Two TestHFileBlock.testGzipCompression unit tests fail when running HBase 0.94.0 after installing zlib and exporting LD_LIBRARY_PATH to point at the Hadoop lib/native/Linux-amd64-64 directory. Have you seen this UT failure before?
Failed tests:
  testGzipCompression[0](org.apache.hadoop.hbase.io.hfile.TestHFileBlock): expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>
  testGzipCompression[1](org.apache.hadoop.hbase.io.hfile.TestHFileBlock): expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>

Ted Yu also reported this when he ran the patches for HBASE-3857; see his comment ("Ted Yu added a comment - 04/Aug/11 01:57") in https://issues.apache.org/jira/browse/HBASE-3857. Mikhail Bautin responded as follows:

  "Green. Mikhail Bautin added a comment - 04/Aug/11 03:50
  Addressing the issue with TestHFileBlock reported by Ted. It turns out there is an "OS" field inside the gzip header which might take different values depending on the OS and configuration. I have changed the unit test to always set that field to the same value before comparing."

The JIRA says HBASE-3857 was fixed in 0.92.0, and it is a huge patch against 0.92.0. Since 0.94.0 is newer than 0.92.0, I checked the patch code, and it has basically all been included in 0.94.0. However, in our testing on 0.94.0, the testGzipCompression unit tests still fail.

--------------------------------------------
*How to reproduce:*
--------------------------------------------
1. Install zlib and run `make test` to verify it is installed correctly on the server running the HBase unit tests *(x86_64, 64-bit)*:
#tar zvxf zlib-1.2.7.tar.gz
#cd zlib-1.2.7
#./configure --prefix=/usr --shared
#make
#make test
2. Copy the hadoop lib/native/Linux-amd64-64 dir (in case the Jenkins server is x86-64) from hadoop-*.tar.gz into a dir the HBase UT user has permission on, e.g. /opt/jenkins/.
3. Export the JNI path as the user running the HBase UTs:
#export LD_LIBRARY_PATH=/opt/jenkins/Linux-amd64-64
Steps 1-3 are what we use to resolve the "Deflater has been closed" issue when running HBase 0.94.0 with an Open JDK (like IBM JDK6, SR11).
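As background on why step 3 changes the test outcome: once LD_LIBRARY_PATH points at the Hadoop native libs, gzip compression goes through native zlib instead of java.util.zip, and the two can legitimately emit different bytes for the same input. One known difference, per Mikhail's comment above, is the gzip header's OS field (byte 9 in the RFC 1952 header): the pure-Java GZIPOutputStream always writes 0 there, while native zlib fills it in from the platform. A quick stand-alone illustration (my own sketch, not code from the HBase test):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipHeaderDemo {
    // Gzip some bytes with the pure-Java codec (no native zlib involved).
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos);
        gz.write(data);
        gz.close();
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] out = gzip("hello".getBytes("UTF-8"));
        // RFC 1952 header layout: ID1 ID2 CM FLG MTIME(4) XFL OS
        System.out.printf("magic=%02x%02x os=%02x%n",
                out[0] & 0xff, out[1] & 0xff, out[9] & 0xff);
        // prints: magic=1f8b os=00
    }
}
```

Whether the OS byte alone accounts for the four differing bytes in the assertion above I am not sure, but it shows how linking in native libraries can change the compressed stream byte-for-byte without changing what it decompresses to.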
4. After steps 1-3, run the HBase 0.94.0 tests (with either the Sun JDK or an Open JDK) via `mvn test` or a JUnit run in Eclipse. You will then hit these TestHFileBlock.testGzipCompression unit test failures.

*The question has become: as soon as we export LD_LIBRARY_PATH to the Hadoop native libs (*.so), the HBase TestHFileBlock.testGzipCompression test cases fail. Is this an HBase defect when running with zlib and the Hadoop native libraries? If yes, shall we open a JIRA to fix and track this for the next release?*

*Here is the log from running org.apache.hadoop.hbase.io.hfile.TestHFileBlock:*

/root/zhangliping/hbase/target/surefire-reports/org.apache.hadoop.hbase.io.hfile.TestHFileBlock.txt file:
-------------------------------------------------------------------------------
Test set: org.apache.hadoop.hbase.io.hfile.TestHFileBlock
-------------------------------------------------------------------------------
Tests run: 16, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 55.455 sec <<< FAILURE!
testGzipCompression[0](org.apache.hadoop.hbase.io.hfile.TestHFileBlock)  Time elapsed: 0.001 sec <<< FAILURE!
org.junit.ComparisonFailure: expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>
    at org.junit.Assert.assertEquals(Assert.java:125)
    at org.junit.Assert.assertEquals(Assert.java:147)
    at org.apache.hadoop.hbase.io.hfile.TestHFileBlock.testGzipCompression(TestHFileBlock.java:252)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
testGzipCompression[1](org.apache.hadoop.hbase.io.hfile.TestHFileBlock)  Time elapsed: 0.027 sec <<< FAILURE!
org.junit.ComparisonFailure: expected:<...\s\xA0\x0F\x00\x00\x[AB\x85g\x91]> but was:<...\s\xA0\x0F\x00\x00\x[E1\x1C\x10\xE5]>
    at org.junit.Assert.assertEquals(Assert.java:125)
    at org.junit.Assert.assertEquals(Assert.java:147)
    at org.apache.hadoop.hbase.io.hfile.TestHFileBlock.testGzipCompression(TestHFileBlock.java:252)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:611)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)

/root/zhangliping/hbase/target/surefire-reports/org.apache.hadoop.hbase.io.hfile.TestHFileBlock-output.txt file:
-------------------------------------------------------------------------------
2012-11-08 21:51:03,428 INFO  [pool-1-thread-1] hbase.ResourceChecker(145): before io.hfile.TestHFileBlock#testBlockHeapSize[1]: 111 threads, 154 file descriptors 0 connections,
2012-11-08 21:51:03,448 DEBUG [pool-1-thread-1] util.ClassSize(229): 0 hb class [B
2012-11-08 21:51:03,449 DEBUG [pool-1-thread-1] util.ClassSize(229): 1 offset int
2012-11-08 21:51:03,449 DEBUG [pool-1-thread-1] util.ClassSize(229): 2 isReadOnly boolean
2012-11-08 21:51:03,449 DEBUG [pool-1-thread-1] util.ClassSize(229): 3 bigEndian boolean
2012-11-08 21:51:03,449 DEBUG [pool-1-thread-1] util.ClassSize(229): 4 nativeByteOrder boolean
2012-11-08 21:51:03,449 DEBUG [pool-1-thread-1] util.ClassSize(229): 5 mark int
2012-11-08 21:51:03,450 DEBUG [pool-1-thread-1] util.ClassSize(229): 6 position int
2012-11-08 21:51:03,450 DEBUG [pool-1-thread-1] util.ClassSize(229): 7 limit int
2012-11-08 21:51:03,450 DEBUG [pool-1-thread-1] util.ClassSize(229): 8 capacity int
2012-11-08 21:51:03,450 DEBUG [pool-1-thread-1] util.ClassSize(229): 9 address long
2012-11-08 21:51:03,450 DEBUG [pool-1-thread-1] util.ClassSize(256): Primitives=31, arrays=1, references(includes 2 for object overhead)=3, refSize 8, size=80, prealign_size=79
2012-11-08 21:51:03,451 DEBUG [pool-1-thread-1] util.ClassSize(229): 0 blockType class org.apache.hadoop.hbase.io.hfile.BlockType
2012-11-08 21:51:03,451 DEBUG [pool-1-thread-1] util.ClassSize(229): 1 onDiskSizeWithoutHeader int
2012-11-08 21:51:03,451 DEBUG [pool-1-thread-1] util.ClassSize(229): 2 uncompressedSizeWithoutHeader int
2012-11-08 21:51:03,451 DEBUG [pool-1-thread-1] util.ClassSize(229): 3 prevBlockOffset long
2012-11-08 21:51:03,451 DEBUG [pool-1-thread-1] util.ClassSize(229): 4 checksumType byte
2012-11-08 21:51:03,452 DEBUG [pool-1-thread-1] util.ClassSize(229): 5 bytesPerChecksum int
2012-11-08 21:51:03,452 DEBUG [pool-1-thread-1] util.ClassSize(229): 6 onDiskDataSizeWithHeader int
2012-11-08 21:51:03,452 DEBUG [pool-1-thread-1] util.ClassSize(229): 7 minorVersion int
2012-11-08 21:51:03,452 DEBUG [pool-1-thread-1] util.ClassSize(229): 8 buf class java.nio.ByteBuffer
2012-11-08 21:51:03,452 DEBUG [pool-1-thread-1] util.ClassSize(229): 9 includesMemstoreTS boolean
2012-11-08 21:51:03,452 DEBUG [pool-1-thread-1] util.ClassSize(229): 10 offset long
2012-11-08 21:51:03,453 DEBUG [pool-1-thread-1] util.ClassSize(229): 11 nextBlockOnDiskSizeWithHeader int
2012-11-08 21:51:03,453 DEBUG [pool-1-thread-1] util.ClassSize(229): 12 cfName class java.lang.String
2012-11-08 21:51:03,453 DEBUG [pool-1-thread-1] util.ClassSize(229): 13 tableName class java.lang.String
2012-11-08 21:51:03,453 DEBUG [pool-1-thread-1] util.ClassSize(229): 14 schemaMetrics class org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics
2012-11-08 21:51:03,454 DEBUG [pool-1-thread-1] util.ClassSize(256): Primitives=42, arrays=0, references(includes 2 for object overhead)=7, refSize 8, size=104, prealign_size=
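For reference, the approach Mikhail describes for HBASE-3857, forcing the gzip header's OS field to one fixed value before the byte-for-byte comparison, could be sketched roughly like this (the class and method names are my own, not the actual TestHFileBlock code):

```java
import java.util.Arrays;

public class GzipOsFieldNormalizer {
    // RFC 1952 gzip header: ID1 ID2 CM FLG MTIME(4) XFL OS -> OS is byte 9.
    static final int OS_FIELD_OFFSET = 9;

    // Return a copy of a gzipped stream with the OS field forced to a fixed
    // value, so streams produced on different platforms can be compared.
    static byte[] normalizeOsField(byte[] gzipped) {
        byte[] copy = Arrays.copyOf(gzipped, gzipped.length);
        if (copy.length > OS_FIELD_OFFSET) {
            copy[OS_FIELD_OFFSET] = 0; // the fixed value itself does not matter
        }
        return copy;
    }

    public static void main(String[] args) {
        // Two gzip headers identical except for the OS byte
        // (0x00 as java.util.zip writes it, 0x03 = Unix as native zlib may write it).
        byte[] a = {0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, 0x00};
        byte[] b = {0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, 0x03};
        System.out.println(Arrays.equals(normalizeOsField(a), normalizeOsField(b)));
        // prints: true
    }
}
```

If TestHFileBlock in 0.94.0 already does something equivalent and the test still fails with the native libs loaded, the remaining difference presumably lies elsewhere in the stream, which is why I am asking whether this warrants a new JIRA.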