[
https://issues.apache.org/jira/browse/HADOOP-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17744264#comment-17744264
]
Xiang Li commented on HADOOP-12677:
-----------------------------------
We hit this problem when trying to read a Spark executor log larger than 2G
(2^31 - 1 bytes, i.e. Integer.MAX_VALUE) and would like to try the patch in
our test environment. Thanks for the fix [~weichiu]!
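A minimal sketch of the overflow behind this issue (the 512-byte buffer size matches the one mentioned in the description; the variable names are illustrative, not Hadoop's):

```java
public class SkipCastDemo {
    public static void main(String[] args) {
        long n = 3_000_000_000L;      // a skip request larger than Integer.MAX_VALUE
        byte[] buf = new byte[512];   // decompressor-side skip buffer

        // Buggy order: cast first, then min. (int) n wraps to a negative
        // value, Math.min picks it, and the subsequent read(buf, 0, len)
        // throws IndexOutOfBoundsException.
        int buggyLen = Math.min((int) n, buf.length);
        System.out.println("cast before min: " + buggyLen); // negative

        // Fixed order: min first on longs, then cast. The result is at most
        // buf.length (512), which always fits in an int.
        int fixedLen = (int) Math.min(n, buf.length);
        System.out.println("cast after min:  " + fixedLen); // 512
    }
}
```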
> DecompressorStream throws IndexOutOfBoundsException when calling skip(long)
> ---------------------------------------------------------------------------
>
> Key: HADOOP-12677
> URL: https://issues.apache.org/jira/browse/HADOOP-12677
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 2.4.0, 2.6.0, 3.0.0-alpha1
> Reporter: Laurent Goujon
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Attachments: HADOOP-12677.001.patch, HADOOP-12677.002.patch
>
>
> DecompressorStream.skip(long) throws an IndexOutOfBoundsException when called
> with a value greater than Integer.MAX_VALUE.
> This is caused by this cast from long to int:
> https://github.com/apache/hadoop-common/blob/HADOOP-3628/src/core/org/apache/hadoop/io/compress/DecompressorStream.java#L125
> The fix is probably to do the cast after applying Math.min: the cast is then
> safe, since the result cannot be bigger than the buffer size (512).
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]