[
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17180603#comment-17180603
]
Masatake Iwasaki commented on HADOOP-17144:
-------------------------------------------
{noformat}
@@ -175,12 +175,13 @@ public void testDecompressorCompressAIOBException() {
public void
testSetInputWithBytesSizeMoreThenDefaultLz4CompressorByfferSize() {
int BYTES_SIZE = 1024 * 64 + 1;
try {
- Lz4Compressor compressor = new Lz4Compressor();
+ Lz4Compressor compressor = new Lz4Compressor(BYTES_SIZE);
{noformat}
The test name implies that the default constructor (Lz4Compressor()) must be
tested, so the test should pass without changes on the test code side.
{noformat}
@@ -73,7 +73,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_comp
     return (jint)0;
   }
-  compressed_direct_buf_len = LZ4_compress(uncompressed_bytes, compressed_bytes, uncompressed_direct_buf_len);
+  compressed_direct_buf_len = LZ4_compress_default(uncompressed_bytes, compressed_bytes, uncompressed_direct_buf_len, LZ4_compressBound(uncompressed_direct_buf_len));
{noformat}
Shouldn't the actual capacity of the destination buffer be passed here rather
than the result of LZ4_compressBound? The same applies to the other
invocations of LZ4_compress_*.
> Update Hadoop's lz4 to v1.9.2
> -----------------------------
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Hemanth Boyina
> Assignee: Hemanth Boyina
> Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch,
> HADOOP-17144.003.patch
>
>
> Update hadoop's native lz4 to v1.9.2