[
https://issues.apache.org/jira/browse/HADOOP-18199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
chaoli updated HADOOP-18199:
----------------------------
Description:
There is a bug when we use ZStandardCompressor wrapped by
`BlockCompressorStream`.
The reason is that `bytesRead` in the ZStandardCompressor implementation is
always zero; it should be set to the input length when setInput is called.
As a result, `rawWriteInt` always writes zero.
!image-2022-04-12-16-11-10-443.png!
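For context, here is a minimal sketch of the proposed idea. It is illustrative only: the class below is a hypothetical stand-in, not the actual Hadoop source or the committed patch. `BlockCompressorStream` writes each block's uncompressed length with `rawWriteInt((int) compressor.getBytesRead())`, so a compressor that never updates `bytesRead` produces a header of zero.

{code:java}
// Illustrative sketch only (not the actual Hadoop patch): track the input
// length in setInput() so that getBytesRead() reflects the uncompressed bytes
// of the current block, which BlockCompressorStream writes as the block header.
public class SketchZStandardCompressor {   // hypothetical stand-in class
  private long bytesRead = 0;              // uncompressed bytes consumed so far
  private byte[] userBuf;
  private int userBufOff;
  private int userBufLen;

  public void setInput(byte[] b, int off, int len) {
    if (b == null || off < 0 || len < 0 || off > b.length - len) {
      throw new ArrayIndexOutOfBoundsException();
    }
    userBuf = b;
    userBufOff = off;
    userBufLen = len;
    bytesRead += len;   // the fix: without this, getBytesRead() stays 0
  }

  public long getBytesRead() {
    return bytesRead;   // consumed by BlockCompressorStream via rawWriteInt(...)
  }

  public void reset() {
    bytesRead = 0;      // cleared per block so each header counts only its block
    userBuf = null;
    userBufOff = 0;
    userBufLen = 0;
  }
}
{code}

With `bytesRead` left at zero, the block header written by `rawWriteInt` claims the block contains no uncompressed data, so a reader of the compressed stream cannot recover the original bytes.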
> Fix ZStandardCompressor bytesRead meta when using BlockCompressorStream
> -----------------------------------------------------------------------
>
> Key: HADOOP-18199
> URL: https://issues.apache.org/jira/browse/HADOOP-18199
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 3.3.1, 3.3.2
> Reporter: chaoli
> Priority: Major
> Attachments: image-2022-04-12-16-11-10-443.png,
> image-2022-04-12-16-14-37-105.png
>
>
> There is a bug when we use ZStandardCompressor wrapped by
> `BlockCompressorStream`.
> The reason is that `bytesRead` in the ZStandardCompressor implementation is
> always zero; it should be set to the input length when setInput is called.
> As a result, `rawWriteInt` always writes zero.
> !image-2022-04-12-16-11-10-443.png!