[ https://issues.apache.org/jira/browse/HBASE-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
stack updated HBASE-3038:
-------------------------
Attachment: 3038-addendum.txt
Had to add this to make it work on other Hadoop versions (smile)... via J-D and Nicolas.
> WALReaderFSDataInputStream.getPos() fails if Filesize > MAX_INT
> ---------------------------------------------------------------
>
> Key: HBASE-3038
> URL: https://issues.apache.org/jira/browse/HBASE-3038
> Project: HBase
> Issue Type: Bug
> Components: regionserver
> Affects Versions: 0.89.20100621, 0.90.0
> Reporter: Nicolas Spiegelberg
> Assignee: Nicolas Spiegelberg
> Priority: Critical
> Fix For: 0.90.0
>
> Attachments: 3038-addendum.txt, HBASE-3038.patch
>
>
> WALReaderFSDataInputStream.getPos() uses this.in.available() to determine
> the actual length of the file. However, available() returns an int
> instead of a long, so the current logic is broken when trying to read
> a split log > 2GB.
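
For context, here is a minimal standalone sketch (not the actual HBASE-3038 patch; the class name and the use of FileStatus.getLen() are illustrative assumptions) showing why an int-valued available() cannot describe a file longer than 2GB, while a long-valued length obtained from the FileSystem can:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AvailableOverflowSketch {
  public static void main(String[] args) throws IOException {
    Path log = new Path(args[0]);                         // e.g. a split log file
    FileSystem fs = FileSystem.get(new Configuration());

    FSDataInputStream in = fs.open(log);
    try {
      // Broken: available() returns an int, so anything past Integer.MAX_VALUE
      // (2GB - 1) is truncated, and any length/getPos() math built on it is wrong.
      long lengthViaAvailable = in.getPos() + in.available();

      // Safer: ask the FileSystem for the real file length as a long.
      long lengthViaStatus = fs.getFileStatus(log).getLen();

      System.out.println("length via available(): " + lengthViaAvailable);
      System.out.println("length via FileStatus:  " + lengthViaStatus);
    } finally {
      in.close();
    }
  }
}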