sunchao commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r495158362



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -291,9 +283,17 @@ public long getBytesWritten() {
   public void end() {
   }
 
-  private native static void initIDs();
-
-  private native int compressBytesDirect();
-
-  public native static String getLibraryName();
+  private int compressDirectBuf() throws IOException {
+    if (uncompressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `uncompressedDirectBuf` for reading
+      uncompressedDirectBuf.limit(uncompressedDirectBufLen).position(0);
+      int size = Snappy.compress((ByteBuffer) uncompressedDirectBuf,
+              (ByteBuffer) compressedDirectBuf);
+      uncompressedDirectBufLen = 0;
+      uncompressedDirectBuf.limit(uncompressedDirectBuf.capacity()).position(0);

Review comment:
       nit: this seems unnecessary as `clear` is called shortly after at the call site?
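
For context, `Buffer.clear()` resets the position to 0 and the limit to the capacity, so the explicit `limit(capacity()).position(0)` reset is a no-op when `clear()` follows at the call site. A minimal standalone illustration (the buffer size and positions are invented for the demo):

```java
import java.nio.ByteBuffer;

public class ClearDemo {
  public static void main(String[] args) {
    ByteBuffer uncompressedDirectBuf = ByteBuffer.allocateDirect(64);
    // Simulate a partially consumed buffer.
    uncompressedDirectBuf.limit(16).position(8);

    // clear() is equivalent to limit(capacity()).position(0);
    // it also discards any mark.
    uncompressedDirectBuf.clear();

    System.out.println(uncompressedDirectBuf.position()); // prints 0
    System.out.println(uncompressedDirectBuf.limit());    // prints 64 (capacity)
  }
}
```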

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##########
@@ -276,10 +268,20 @@ public void end() {
     // do nothing
   }
 
-  private native static void initIDs();
+  private int decompressDirectBuf() throws IOException {
+    if (compressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `compressedDirectBuf` for reading
+      compressedDirectBuf.limit(compressedDirectBufLen).position(0);
+      int size = Snappy.uncompress((ByteBuffer) compressedDirectBuf,
+              (ByteBuffer) uncompressedDirectBuf);
+      compressedDirectBufLen = 0;
+      compressedDirectBuf.limit(compressedDirectBuf.capacity()).position(0);

Review comment:
       nit: can we just call `compressedDirectBuf.clear()`?
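
If the suggestion were applied, the method could look like the sketch below. This is only an illustration, not the committed code: the harness class, buffer sizes, and the trailing `return size;` (truncated out of the hunk above) are assumed.

```java
import java.io.IOException;
import java.nio.Buffer;
import java.nio.ByteBuffer;

import org.xerial.snappy.Snappy;

// Hypothetical harness mirroring the fields SnappyDecompressor uses.
class DecompressSketch {
  private Buffer compressedDirectBuf = ByteBuffer.allocateDirect(64 * 1024);
  private Buffer uncompressedDirectBuf = ByteBuffer.allocateDirect(64 * 1024);
  private int compressedDirectBufLen;

  private int decompressDirectBuf() throws IOException {
    if (compressedDirectBufLen == 0) {
      return 0;
    }
    // Set the position and limit of `compressedDirectBuf` for reading.
    compressedDirectBuf.limit(compressedDirectBufLen).position(0);
    int size = Snappy.uncompress((ByteBuffer) compressedDirectBuf,
            (ByteBuffer) uncompressedDirectBuf);
    compressedDirectBufLen = 0;
    // Suggested simplification: clear() replaces the manual
    // limit(capacity()).position(0) reset.
    compressedDirectBuf.clear();
    return size;
  }
}
```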

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
##########
@@ -446,4 +442,49 @@ public void doWork() throws Exception {
 
     ctx.waitFor(60000);
   }
+
+  @Test
+  public void testSnappyCompatibility() throws Exception {
+    // HADOOP-17125. Using snappy-java in SnappyCodec. These strings are raw data and compressed data
+    // using previous native Snappy codec. We use updated Snappy codec to decode it and check if it
+    // matches.
+    String rawData = "010a06030a040a0c0109020c0a010204020d02000b010701080605080b090902060a08050206" +
+            "0a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d060907020a0" +
+            "30a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b060e030e0a07" +
+            "050d06050d";
+    String compressed = "8001f07f010a06030a040a0c0109020c0a010204020d02000b010701080605080b0909020" +
+            "60a080502060a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d" +
+            "060907020a030a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b0" +
+            "60e030e0a07050d06050d";
+
+    byte[] rawDataBytes = Hex.decodeHex(rawData);
+    byte[] compressedBytes = Hex.decodeHex(compressed);
+
+    ByteBuffer inBuf = ByteBuffer.allocateDirect(compressedBytes.length);
+    inBuf.put(compressedBytes, 0, compressedBytes.length);
+    inBuf.flip();
+
+    ByteBuffer outBuf = ByteBuffer.allocateDirect(rawDataBytes.length);
+    ByteBuffer expected = ByteBuffer.wrap(rawDataBytes);
+
+    SnappyDecompressor.SnappyDirectDecompressor decompressor = new SnappyDecompressor.SnappyDirectDecompressor();

Review comment:
       nit: long lines (over 80 chars).
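
For example, the flagged declaration fits within 80 columns when wrapped at the assignment:

```java
SnappyDecompressor.SnappyDirectDecompressor decompressor =
    new SnappyDecompressor.SnappyDirectDecompressor();
```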

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
##########
@@ -495,19 +479,16 @@ public String getName() {
     Compressor compressor = pair.compressor;
 
     if (compressor.getClass().isAssignableFrom(Lz4Compressor.class)
-            && (NativeCodeLoader.isNativeCodeLoaded()))

Review comment:
       nit: unrelated changes :)

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##########
@@ -45,30 +46,21 @@
   private int userBufOff = 0, userBufLen = 0;
   private boolean finished;
 
-  private static boolean nativeSnappyLoaded = false;
-
-  static {
-    if (NativeCodeLoader.isNativeCodeLoaded() &&
-        NativeCodeLoader.buildSupportsSnappy()) {
-      try {
-        initIDs();
-        nativeSnappyLoaded = true;
-      } catch (Throwable t) {
-        LOG.error("failed to load SnappyDecompressor", t);
-      }
-    }
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-    return nativeSnappyLoaded;
-  }
-  
   /**
    * Creates a new compressor.
    *
    * @param directBufferSize size of the direct buffer to be used.
    */
   public SnappyDecompressor(int directBufferSize) {
+    // `snappy-java` is provided scope. We need to check if it is available.
+    try {
+      SnappyLoader.getVersion();

Review comment:
       nit: `SnappyLoader` is marked as "internal use-only" though, so I'm not sure if there is a better alternative here.
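
One possible alternative, sketched below under the assumption that `Snappy.getNativeLibraryVersion()` (part of snappy-java's public `Snappy` facade) forces the same library loading as `SnappyLoader.getVersion()`; the probe class and message are hypothetical:

```java
import org.xerial.snappy.Snappy;

class SnappyProbe {
  // Availability check that avoids the internal-use-only SnappyLoader:
  // any public Snappy entry point forces snappy-java to load (or fail).
  static void checkSnappyAvailable() {
    try {
      Snappy.getNativeLibraryVersion();
    } catch (Throwable t) {
      throw new RuntimeException("snappy-java is not available", t);
    }
  }
}
```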




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
