[
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195051#comment-17195051
]
Hemanth Boyina commented on HADOOP-17144:
-----------------------------------------
Thanks for the comment [~iwasakims], and sorry for the late response.
{quote}Adding a test case similar to
TestLz4CompressorDecompressor#testSetInputWithBytesSizeMoreThenDefaultLz4CompressorByfferSize
for decompressor would make the point clear
{quote}
We do have a test case for a similar scenario in
TestCompressorDecompressor#testCompressorDecompressorWithExeedBufferLimit.
I modified the lz4 constructors to use the default buffer size: the compressor
worked the same way as you mentioned, but the decompressor did not, as the
lz4 decompressor API returned a negative value for this scenario, which is
incorrect.

Please correct me if I am missing something here.
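Roughly, the scenario is: compress an input larger than the default direct
buffer size with Lz4Compressor, then feed the result back through
Lz4Decompressor. A minimal sketch of that round trip follows; the buffer and
input sizes here are illustrative only, not the exact values or test wiring
used in the patch:

{code:java}
// Rough sketch, assuming the Lz4Compressor/Lz4Decompressor constructors
// that take a direct buffer size; sizes are illustrative, not the exact
// values from the actual test.
import java.util.Random;
import org.apache.hadoop.io.compress.lz4.Lz4Compressor;
import org.apache.hadoop.io.compress.lz4.Lz4Decompressor;

public class Lz4ExceedBufferSketch {
  public static void main(String[] args) throws Exception {
    final int bufferSize = 64 * 1024;          // default direct buffer size
    byte[] input = new byte[bufferSize + 1];   // one byte more than the buffer
    new Random(12345L).nextBytes(input);

    Lz4Compressor compressor = new Lz4Compressor(bufferSize);
    compressor.setInput(input, 0, input.length);
    compressor.finish();

    byte[] compressed = new byte[bufferSize * 2];
    int compressedLen = 0;
    while (!compressor.finished()) {
      compressedLen += compressor.compress(compressed, compressedLen,
          compressed.length - compressedLen);
    }

    // Feeding the compressed bytes back through the decompressor is the
    // step where the negative return value mentioned above was observed.
    Lz4Decompressor decompressor = new Lz4Decompressor(bufferSize);
    decompressor.setInput(compressed, 0, compressedLen);
    byte[] output = new byte[input.length];
    int decompressedLen = decompressor.decompress(output, 0, output.length);
    System.out.println("compressed=" + compressedLen
        + " decompressed=" + decompressedLen);
  }
}
{code}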
> Update Hadoop's lz4 to v1.9.2
> -----------------------------
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Hemanth Boyina
> Assignee: Hemanth Boyina
> Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch,
> HADOOP-17144.003.patch, HADOOP-17144.004.patch
>
>
> Update hadoop's native lz4 to v1.9.2