[
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=490911&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490911
]
ASF GitHub Bot logged work on HADOOP-17125:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 25/Sep/20 13:25
Start Date: 25/Sep/20 13:25
Worklog Time Spent: 10m
Work Description: viirya commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r494069586
##########
File path:
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
##########
@@ -446,4 +442,43 @@ public void doWork() throws Exception {
ctx.waitFor(60000);
}
+
+ @Test
+ public void testSnappyCompatibility() throws Exception {
+ // HADOOP-17125. Using snappy-java in SnappyCodec. These strings are raw data and compressed data
+ // produced by the previous native Snappy codec. We use the updated Snappy codec to decode it and
+ // check if it matches.
+ String rawData = "010a06030a040a0c0109020c0a010204020d02000b010701080605080b090902060a080502060a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d060907020a030a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b060e030e0a07050d06050d";
Review comment:
The string is meant to keep the test as simple as possible. Maybe further
shorten the string?
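
For illustration, a minimal standalone sketch of the round-trip idea behind this test, with placeholder hex and `Snappy.compress` standing in for the stored native-codec fixture (the class, helper, and data below are mine, not the PR's):

```java
import java.util.Arrays;
import org.xerial.snappy.Snappy;

public class CompatibilitySketch {
  // Decode a hex string like the test fixtures into raw bytes.
  static byte[] fromHex(String s) {
    byte[] out = new byte[s.length() / 2];
    for (int i = 0; i < out.length; i++) {
      out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
  }

  public static void main(String[] args) throws Exception {
    byte[] raw = fromHex("010a06030a04");  // placeholder, not the PR's data
    byte[] fixture = Snappy.compress(raw); // stand-in for bytes produced by
                                           // the old native codec
    byte[] restored = Snappy.uncompress(fixture);
    System.out.println(Arrays.equals(raw, restored)); // expect: true
  }
}
```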
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -291,9 +282,17 @@ public long getBytesWritten() {
public void end() {
}
- private native static void initIDs();
-
- private native int compressBytesDirect();
-
- public native static String getLibraryName();
+ private int compressBytesDirect() throws IOException {
Review comment:
The names `compressBytesDirect` and `decompressBytesDirect` were basically
carried over from the original method names. `compressDirectBuf` and
`decompressDirectBuf` look good to me.
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -48,30 +49,20 @@
private long bytesRead = 0L;
private long bytesWritten = 0L;
- private static boolean nativeSnappyLoaded = false;
-
- static {
- if (NativeCodeLoader.isNativeCodeLoaded() &&
- NativeCodeLoader.buildSupportsSnappy()) {
- try {
- initIDs();
- nativeSnappyLoaded = true;
- } catch (Throwable t) {
- LOG.error("failed to load SnappyCompressor", t);
- }
- }
- }
-
- public static boolean isNativeCodeLoaded() {
- return nativeSnappyLoaded;
- }
-
/**
* Creates a new compressor.
*
* @param directBufferSize size of the direct buffer to be used.
*/
public SnappyCompressor(int directBufferSize) {
+ // `snappy-java` is in provided scope. We need to check its availability.
+ try {
+ SnappyLoader.getVersion();
+ } catch (Throwable t) {
+ throw new RuntimeException("native snappy library not available: " +
Review comment:
It is the snappy-java jar, yeah. I will revise the message.
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -48,30 +49,20 @@
private long bytesRead = 0L;
private long bytesWritten = 0L;
- private static boolean nativeSnappyLoaded = false;
-
- static {
- if (NativeCodeLoader.isNativeCodeLoaded() &&
- NativeCodeLoader.buildSupportsSnappy()) {
- try {
- initIDs();
- nativeSnappyLoaded = true;
- } catch (Throwable t) {
- LOG.error("failed to load SnappyCompressor", t);
- }
- }
- }
-
- public static boolean isNativeCodeLoaded() {
- return nativeSnappyLoaded;
- }
-
/**
* Creates a new compressor.
*
* @param directBufferSize size of the direct buffer to be used.
*/
public SnappyCompressor(int directBufferSize) {
+ // `snappy-java` is in provided scope. We need to check its availability.
Review comment:
Oops, thanks.
##########
File path:
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
##########
@@ -446,4 +442,43 @@ public void doWork() throws Exception {
ctx.waitFor(60000);
}
+
+ @Test
+ public void testSnappyCompatibility() throws Exception {
+ // HADOOP-17125. Using snappy-java in SnappyCodec. These strings are raw data and compressed data
+ // produced by the previous native Snappy codec. We use the updated Snappy codec to decode it and
+ // check if it matches.
+ String rawData = "010a06030a040a0c0109020c0a010204020d02000b010701080605080b090902060a080502060a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d060907020a030a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b060e030e0a07050d06050d";
Review comment:
Ok, I split the long string. Thanks.
##########
File path:
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
##########
@@ -432,7 +412,11 @@ public void assertCompression(String name, Compressor compressor,
joiner.join(name, "byte arrays not equals error !!!"),
originalRawData, decompressOut.toByteArray());
} catch (Exception ex) {
- fail(joiner.join(name, ex.getMessage()));
+ if (ex.getMessage() != null) {
+ fail(joiner.join(name, ex.getMessage()));
+ } else {
+ fail(joiner.join(name, ExceptionUtils.getStackTrace(ex)));
Review comment:
When I first took over this change, the test failed with an NPE and no
details. That was because the thrown exception returned null from
`getMessage()`, and `joiner.join(name, null)` throws the NPE. So I changed it
to print the stack trace when `getMessage()` returns null. It's better for debugging.
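
For illustration, a minimal standalone sketch of the failure mode (class and variable names are mine, not from the patch): Guava's `Joiner` throws on null parts, so an exception whose `getMessage()` is null masks the real failure unless we fall back to the stack trace.

```java
import com.google.common.base.Joiner;
import org.apache.commons.lang3.exception.ExceptionUtils;

public class JoinerNpeSketch {
  public static void main(String[] args) {
    Joiner joiner = Joiner.on("- ");
    Exception ex = new NullPointerException(); // getMessage() returns null
    // joiner.join("codec", ex.getMessage()) would itself throw an NPE here,
    // hiding the original exception; use the stack trace instead.
    String detail = ex.getMessage() != null
        ? ex.getMessage()
        : ExceptionUtils.getStackTrace(ex);
    System.out.println(joiner.join("codec", detail));
  }
}
```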
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -291,9 +283,17 @@ public long getBytesWritten() {
public void end() {
}
- private native static void initIDs();
-
- private native int compressBytesDirect();
-
- public native static String getLibraryName();
+ private int compressDirectBuf() throws IOException {
+ if (uncompressedDirectBufLen == 0) {
+ return 0;
+ } else {
+ // Set the position and limit of `uncompressedDirectBuf` for reading
+ uncompressedDirectBuf.limit(uncompressedDirectBufLen).position(0);
+ int size = Snappy.compress((ByteBuffer) uncompressedDirectBuf,
+ (ByteBuffer) compressedDirectBuf);
+ uncompressedDirectBufLen = 0;
+ uncompressedDirectBuf.limit(directBufferSize).position(0);
Review comment:
done. thanks.
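
As a side note, a minimal standalone sketch of the `Snappy.compress(ByteBuffer, ByteBuffer)` call used above (the class name and buffer sizes are illustrative assumptions, not the codec's actual configuration):

```java
import java.nio.ByteBuffer;
import org.xerial.snappy.Snappy;

public class DirectBufSketch {
  public static void main(String[] args) throws Exception {
    byte[] input = "snappy snappy snappy snappy".getBytes("UTF-8");
    // snappy-java's ByteBuffer API expects direct buffers.
    ByteBuffer uncompressed = ByteBuffer.allocateDirect(1024);
    ByteBuffer compressed = ByteBuffer.allocateDirect(1024);
    uncompressed.put(input);
    // Prepare the source for reading, mirroring the
    // uncompressedDirectBuf.limit(len).position(0) calls above.
    uncompressed.flip();
    int size = Snappy.compress(uncompressed, compressed);
    System.out.println("compressed " + input.length + " -> " + size + " bytes");
  }
}
```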
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 490911)
Time Spent: 17h 20m (was: 17h 10m)
> Using snappy-java in SnappyCodec
> --------------------------------
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
> Issue Type: New Feature
> Components: common
> Affects Versions: 3.3.0
> Reporter: DB Tsai
> Priority: Major
> Labels: pull-request-available
> Time Spent: 17h 20m
> Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several
> disadvantages:
> * It requires native *libhadoop* and *libsnappy* to be installed in the system
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of
> the clusters, container images, or local test environments, which adds huge
> complexity from a deployment point of view. In some environments, it requires
> compiling the natives from source, which is non-trivial. Also, this approach
> is platform dependent; the binary may not work on a different platform, so it
> requires recompilation.
> * It requires extra configuration of *java.library.path* to load the
> natives, which results in higher application deployment and maintenance costs
> for users.
> Projects such as *Spark* and *Parquet* use
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the
> jar file, and it can automatically load the native binaries into the JVM from the
> jar without any setup. If a native implementation cannot be found for a
> platform, it can fall back to a pure-Java implementation of Snappy based on
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
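
(For context, a minimal sketch of the snappy-java byte-array API the issue proposes to adopt; no *libhadoop*/*libsnappy* or *java.library.path* setup is needed because the native binary is loaded from the jar, with a pure-Java fallback. The class name and sample data are illustrative.)

```java
import org.xerial.snappy.Snappy;

public class SnappyJavaSketch {
  public static void main(String[] args) throws Exception {
    byte[] raw = "Hadoop SnappyCodec sample data".getBytes("UTF-8");
    // Works out of the box: the native library ships inside the jar.
    byte[] compressed = Snappy.compress(raw);
    byte[] restored = Snappy.uncompress(compressed);
    System.out.println(new String(restored, "UTF-8"));
  }
}
```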
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]