[ https://issues.apache.org/jira/browse/HBASE-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12914846#action_12914846 ]

Kannan Muthukkaruppan commented on HBASE-3038:
----------------------------------------------

Excellent catch, Nicolas!

All: just as an FYI, this is the exception/stack trace you'll run into because of this issue when replaying large files in recovered.edits:

{code}
2010-09-22 16:05:43,939 INFO org.apache.hadoop.hbase.regionserver.HRegion: Replaying edits from hdfs://<xyz>:9000/HBASE/test_table/ce0cd6e5793564a4b1a75de83232701b/recovered.edits/0000000000020477687; minSeqId=20484537
2010-09-22 16:06:02,475 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Error opening test_table,5dddddd8,1283714332727.ce0cd6e5793564a4b1a75de83232701b.
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:375)
	at org.apache.hadoop.io.SequenceFile$Reader.readRecordLength(SequenceFile.java:1953)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1983)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1888)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1934)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:121)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:113)
	at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1982)
	at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1957)
	at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1915)
	at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:344)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1479)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1426)
	at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1334)
	at java.lang.Thread.run(Thread.java:619)
{code}


> WALReaderFSDataInputStream.getPos() fails if Filesize > MAX_INT
> ---------------------------------------------------------------
>
>                 Key: HBASE-3038
>                 URL: https://issues.apache.org/jira/browse/HBASE-3038
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>    Affects Versions: 0.89.20100621, 0.90.0
>            Reporter: Nicolas Spiegelberg
>            Assignee: Nicolas Spiegelberg
>            Priority: Critical
>             Fix For: 0.89.20100924, 0.90.0
>
>
> WALReaderFSDataInputStream.getPos() uses this.in.available() to determine 
> the actual length of the file, but available() returns an int instead of a 
> long. As a result, the current logic is broken when trying to read a split 
> log > 2GB.
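To see why the int return type breaks things, here is a minimal standalone Java sketch (not the HBase code itself; the file size is invented for illustration) showing how a length just over 2 GB silently wraps negative when narrowed to int, the same kind of narrowing that an available()-based position calculation suffers:

```java
public class AvailableOverflowSketch {
    public static void main(String[] args) {
        // Hypothetical split-log size just over 2 GB (made-up value,
        // not taken from the report above).
        long fileSize = 2L * 1024 * 1024 * 1024 + 512; // 2147484160 bytes

        // InputStream.available() returns int, so a stream over a file
        // larger than Integer.MAX_VALUE bytes cannot report its remaining
        // byte count through it; the long silently narrows and wraps.
        int narrowed = (int) fileSize;

        System.out.println("true length = " + fileSize);      // 2147484160
        System.out.println("as int      = " + narrowed);      // -2147483136
        System.out.println("wrapped?    = " + (narrowed < 0)); // true
    }
}
```

With a negative "remaining bytes" value, any position or end-of-file computation built on it lands short of the real file end, which is consistent with the EOFException seen mid-replay above.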

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.