[
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=498683&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-498683
]
ASF GitHub Bot logged work on HADOOP-17292:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 09/Oct/20 17:52
Start Date: 09/Oct/20 17:52
Worklog Time Spent: 10m
Work Description: sunchao commented on a change in pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#discussion_r502564166
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -76,6 +64,19 @@ public Lz4Compressor(int directBufferSize, boolean useLz4HC)
{
this.useLz4HC = useLz4HC;
this.directBufferSize = directBufferSize;
+ try {
+ LZ4Factory lz4Factory = LZ4Factory.fastestInstance();
+ if (useLz4HC) {
+ lz4Compressor = lz4Factory.highCompressor();
Review comment:
The library also allows configuring the compression level; perhaps we can
add a Hadoop option for that later. This just uses the default compression
level.
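For reference, a minimal sketch of what such an option could look like with lz4-java. The config key mentioned in the comment is hypothetical; `LZ4Factory.highCompressor(int)` in lz4-java does accept an explicit level:

```java
import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;

public class Lz4HcLevelSketch {
    public static void main(String[] args) {
        LZ4Factory factory = LZ4Factory.fastestInstance();

        // Current behavior in the PR: default HC compression level.
        LZ4Compressor defaultHc = factory.highCompressor();

        // Possible later extension: level read from a (hypothetical)
        // Hadoop option such as io.compression.codec.lz4.hc.level.
        int level = 9;
        LZ4Compressor tunedHc = factory.highCompressor(level);

        byte[] data = new byte[4096]; // all zeros compress well
        System.out.println(defaultHc.compress(data).length > 0
            && tunedHc.compress(data).length > 0); // true
    }
}
```

Either compressor produces output readable by any LZ4 decompressor; the level only trades compression speed for ratio.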
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -76,6 +64,19 @@ public Lz4Compressor(int directBufferSize, boolean useLz4HC)
{
this.useLz4HC = useLz4HC;
this.directBufferSize = directBufferSize;
+ try {
+ LZ4Factory lz4Factory = LZ4Factory.fastestInstance();
+ if (useLz4HC) {
Review comment:
nit: seems we no longer need the field `useLz4HC` with this.
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.java
##########
@@ -272,7 +269,19 @@ public synchronized void end() {
// do nothing
}
- private native static void initIDs();
-
- private native int decompressBytesDirect();
+ private int decompressDirectBuf() {
+ if (compressedDirectBufLen == 0) {
Review comment:
I don't think this will ever happen, but it's not a big deal.
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.java
##########
@@ -272,7 +269,19 @@ public synchronized void end() {
// do nothing
}
- private native static void initIDs();
-
- private native int decompressBytesDirect();
+ private int decompressDirectBuf() {
+ if (compressedDirectBufLen == 0) {
+ return 0;
+ } else {
+ // Set the position and limit of `compressedDirectBuf` for reading
+ compressedDirectBuf.limit(compressedDirectBufLen).position(0);
+ lz4Decompressor.decompress((ByteBuffer) compressedDirectBuf,
+ (ByteBuffer) uncompressedDirectBuf);
+ compressedDirectBufLen = 0;
+ compressedDirectBuf.limit(compressedDirectBuf.capacity()).position(0);
Review comment:
you can just call `clear`?
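For context, `ByteBuffer.clear()` only resets position and limit (it does not erase data), so it is equivalent to the chained `limit(capacity()).position(0)` calls. A small standalone demonstration using plain JDK buffers:

```java
import java.nio.ByteBuffer;

public class BufferClearDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(64);
        buf.put(new byte[]{1, 2, 3}); // position = 3
        buf.limit(16).position(8);

        // clear() resets position to 0 and limit to capacity; it is
        // equivalent to buf.limit(buf.capacity()).position(0) and does
        // not touch the bytes already written.
        buf.clear();

        System.out.println(buf.position() + " " + buf.limit()); // 0 64
    }
}
```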
##########
File path:
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
##########
@@ -71,6 +71,11 @@
<artifactId>assertj-core</artifactId>
<scope>test</scope>
</dependency>
+ <dependency>
Review comment:
why is this needed?
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -302,11 +303,20 @@ public synchronized long getBytesWritten() {
public synchronized void end() {
}
- private native static void initIDs();
-
- private native int compressBytesDirect();
-
- private native int compressBytesDirectHC();
-
- public native static String getLibraryName();
+ private int compressDirectBuf() {
Review comment:
it seems some of the methods in this class are exactly the same as in
`SnappyCompressor` - perhaps we can do some refactoring later.
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Compressor.java
##########
@@ -302,11 +303,20 @@ public synchronized long getBytesWritten() {
public synchronized void end() {
}
- private native static void initIDs();
-
- private native int compressBytesDirect();
-
- private native int compressBytesDirectHC();
-
- public native static String getLibraryName();
+ private int compressDirectBuf() {
+ if (uncompressedDirectBufLen == 0) {
+ return 0;
+ } else {
+ // Set the position and limit of `uncompressedDirectBuf` for reading
+ uncompressedDirectBuf.limit(uncompressedDirectBufLen).position(0);
+ compressedDirectBuf.clear();
Review comment:
I think this isn't necessary since it's called right before the call
site?
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.java
##########
@@ -272,7 +269,19 @@ public synchronized void end() {
// do nothing
}
- private native static void initIDs();
-
- private native int decompressBytesDirect();
+ private int decompressDirectBuf() {
+ if (compressedDirectBufLen == 0) {
+ return 0;
+ } else {
+ // Set the position and limit of `compressedDirectBuf` for reading
Review comment:
nit: this comment doesn't add much value - it just states exactly what the
code is doing.
##########
File path:
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/lz4/TestLz4CompressorDecompressor.java
##########
@@ -330,4 +328,33 @@ public void doWork() throws Exception {
ctx.waitFor(60000);
}
+
+ @Test
+ public void testLz4Compatibility() throws Exception {
+ Path filePath = new Path(TestLz4CompressorDecompressor.class
Review comment:
nit: perhaps some comments on this - not quite sure what it is testing.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 498683)
Time Spent: 2h 50m (was: 2h 40m)
> Using lz4-java in Lz4Codec
> --------------------------
>
> Key: HADOOP-17292
> URL: https://issues.apache.org/jira/browse/HADOOP-17292
> Project: Hadoop Common
> Issue Type: New Feature
> Components: common
> Affects Versions: 3.3.0
> Reporter: L. C. Hsieh
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the lz4 codec, which has several
> disadvantages:
> It requires the native libhadoop to be installed in the system
> LD_LIBRARY_PATH, and it has to be installed separately on each node of the
> cluster, container image, or local test environment, which adds significant
> complexity from a deployment point of view. In some environments, it
> requires compiling the natives from source, which is non-trivial. Also,
> this approach is platform dependent; the binary may not work on a different
> platform, so it requires recompilation.
> It requires extra configuration of java.library.path to load the natives,
> which results in higher application deployment and maintenance costs for
> users.
> Projects such as Spark use [lz4-java|https://github.com/lz4/lz4-java],
> which is a JNI-based implementation. It bundles the native binaries in the
> jar file and can automatically load them into the JVM without any setup.
> If a native implementation cannot be found for a platform, it falls back
> to a pure-Java implementation of lz4.
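To illustrate the zero-setup loading described above, a minimal lz4-java round trip (needs only the lz4-java jar on the classpath; `fastestInstance()` picks the bundled native implementation when available and otherwise falls back to pure Java):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4FastDecompressor;

public class Lz4JavaRoundTrip {
    public static void main(String[] args) {
        // No java.library.path or LD_LIBRARY_PATH setup required: the
        // factory loads the bundled native binding if it can, and
        // silently falls back to the pure-Java implementation.
        LZ4Factory factory = LZ4Factory.fastestInstance();

        byte[] data = "hello hadoop hello hadoop".getBytes(StandardCharsets.UTF_8);
        LZ4Compressor compressor = factory.fastCompressor();
        byte[] compressed = compressor.compress(data);

        LZ4FastDecompressor decompressor = factory.fastDecompressor();
        byte[] restored = decompressor.decompress(compressed, data.length);

        System.out.println(Arrays.equals(data, restored)); // true
    }
}
```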
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]