[ https://issues.apache.org/jira/browse/MAPREDUCE-15?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993792#comment-14993792 ]

Andrew Olson commented on MAPREDUCE-15:
---------------------------------------

We encountered the stack trace from this issue's description a few days ago. The 
SequenceFile "corruption" (unreadability) is caused by an integer overflow [1] that 
occurs when the BytesWritable size exceeds Integer.MAX_VALUE / 3 (about 682 MB). 
There is a Stack Overflow discussion of this at [2].

[1] https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java#L123
[2] http://stackoverflow.com/questions/24127304/negativearraysizeexception-when-creating-a-sequencefile-with-large-1gb-bytesw
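To make the arithmetic concrete, here is a minimal Java sketch of the overflow, 
assuming the size * 3 / 2 capacity growth that BytesWritable applies when it 
resizes (per the source linked in [1]); the class name below is made up for 
illustration and is not Hadoop code:

    // Illustrative only: reproduces the int overflow behind the
    // NegativeArraySizeException, not the actual BytesWritable source.
    public class BytesWritableOverflowSketch {
        public static void main(String[] args) {
            int size = Integer.MAX_VALUE / 3 + 1;   // just over the ~682 MB threshold
            int newCapacity = size * 3 / 2;         // size * 3 wraps past Integer.MAX_VALUE
            System.out.println(size);               // 715827883
            System.out.println(newCapacity);        // negative: -1073741823
            // new byte[newCapacity] would then throw NegativeArraySizeException,
            // which surfaces as the stack trace in this issue's description.
        }
    }

Anything at or below Integer.MAX_VALUE / 3 keeps size * 3 within int range, which 
is why the failure only appears with very large values.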

> SequenceFile RecordReader should skip bad records
> -------------------------------------------------
>
>                 Key: MAPREDUCE-15
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-15
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Joydeep Sen Sarma
>
> Currently a bad record in a SequenceFile leads to the entire job failing. 
> The best workaround is to manually skip the errant file (by looking at which 
> map task failed). This is a poor option because it is manual and because one 
> should be able to skip a SequenceFile block (instead of the entire file).
> While we don't see this often (and I don't know why this corruption happened), 
> here's an example stack:
> Status : FAILED java.lang.NegativeArraySizeException
>       at org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:96)
>       at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:75)
>       at org.apache.hadoop.io.BytesWritable.readFields(BytesWritable.java:130)
>       at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:1640)
>       at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1712)
>       at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:79)
>       at org.apache.hadoop.mapred.MapTask$1.next(MapTask.java:176)
> Ideally the RecordReader should just skip the entire chunk if it gets an 
> unrecoverable error while reading.
> This was the consensus in HADOOP-153 as well (that data corruption should be 
> handled by RecordReaders), and HADOOP-3144 did something similar for 
> TextInputFormat.


