[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17184169#comment-17184169 ]

Hemanth Boyina commented on HADOOP-17144:
-----------------------------------------

Thanks for the review, [~iwasakims].
{quote}Since user of Lz4Decompressor provides already compressed data as input, 
we do not need to expand the internal buffer?
{quote}
The user's compressed data input is first kept in userBuf, and in 
setInputFromSavedData the contents of userBuf are put into compressedDirectBuf. 
Since the compressed data length can be greater than the source length, we need 
to expand the buffer.
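As an illustration, the expansion step described above could look like the following sketch. This is hypothetical code loosely modeled on Hadoop's Lz4Decompressor (field and method names are assumptions, not the actual patch):

```java
import java.nio.ByteBuffer;

public class BufferExpansionSketch {
    byte[] userBuf = new byte[0];
    int userBufOff, userBufLen;
    ByteBuffer compressedDirectBuf = ByteBuffer.allocateDirect(64 * 1024);

    // Sketch of setInputFromSavedData: grow the direct buffer when the
    // saved compressed data is larger than the current capacity, since
    // compressed data can exceed the original source length.
    void setInputFromSavedData() {
        if (userBufLen > compressedDirectBuf.capacity()) {
            compressedDirectBuf = ByteBuffer.allocateDirect(userBufLen);
        }
        compressedDirectBuf.clear();
        compressedDirectBuf.put(userBuf, userBufOff, userBufLen);
    }
}
```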
{quote}This looks incorrect since the userBufLen could be greater than 
directBufferSize.
{quote}
In the LZ4 decompressor, userBufLen is nothing but the compressed data length, 
so since the compressed data length can be greater than the source length, we 
need to set compressedDirectBufLen to userBufLen.

The LZ4 documentation states that compression is guaranteed to succeed if 
'dstCapacity' >= LZ4_compressBound(srcSize).
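For reference, LZ4_compressBound in lz4.h (v1.9.2) evaluates to srcSize + srcSize/255 + 16, returning 0 when srcSize exceeds LZ4_MAX_INPUT_SIZE. A quick Java transcription of that formula, for illustration only:

```java
public class Lz4Bound {
    static final int LZ4_MAX_INPUT_SIZE = 0x7E000000; // from lz4.h

    // Worst-case compressed size for a given source size, mirroring
    // the LZ4_COMPRESSBOUND macro in lz4.h (v1.9.2).
    static int lz4CompressBound(int srcSize) {
        if (srcSize < 0 || srcSize > LZ4_MAX_INPUT_SIZE) {
            return 0; // lz4 returns 0 for out-of-range input sizes
        }
        return srcSize + srcSize / 255 + 16;
    }
}
```

The bound is always larger than srcSize, which is exactly why the decompressor's internal buffer may need to hold more bytes than the original source length.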
{quote}cc and whitespace warnings should be addressed too

updating the calculation of the maxlength
{quote}
Will update in the next patch.

> Update Hadoop's lz4 to v1.9.2
> -----------------------------
>
>                 Key: HADOOP-17144
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17144
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Hemanth Boyina
>            Assignee: Hemanth Boyina
>            Priority: Major
>         Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, 
> HADOOP-17144.003.patch, HADOOP-17144.004.patch
>
>
> Update hadoop's native lz4 to v1.9.2 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]