[
https://issues.apache.org/jira/browse/MAPREDUCE-2018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12899652#action_12899652
]
Luke Lu commented on MAPREDUCE-2018:
------------------------------------
Looks good to me.
> TeraSort example fails in trunk
> -------------------------------
>
> Key: MAPREDUCE-2018
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2018
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: examples
> Affects Versions: 0.22.0
> Environment: Compiled, built and ran the terasort example from trunk
> using several randomly generated files as input. TeraSort fails.
> Reporter: Krishna Ramachandran
> Attachments: mapred-2018.patch
>
>
> Exceptions are thrown while computing splits near the end of the file, typically
> when the number of bytes read is smaller than RECORD_LENGTH.
> 10/08/17 22:44:17 WARN conf.Configuration: mapred.task.id is deprecated.
> Instead, use mapreduce.task.attempt.id
> 10/08/17 22:44:17 INFO input.FileInputFormat: Total input paths to process : 1
> Spent 19ms computing base-splits.
> Spent 2ms computing TeraScheduler splits.
> Computing input splits took 22ms
> Sampling 1 splits of 1
> Got an exception while reading splits java.io.EOFException: read past eof
> at
> org.apache.hadoop.examples.terasort.TeraInputFormat$TeraRecordReader.nextKeyValue(TeraInputFormat.java:267)
> at
> org.apache.hadoop.examples.terasort.TeraInputFormat$1.run(TeraInputFormat.java:181)
> I believe TeraInputFormat assumes the file sizes are exact multiples of
> RECORD_LENGTH.
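For context, TeraSort records are 100 bytes each (a 10-byte key followed by a 90-byte
value). The sketch below is a minimal, hypothetical illustration of how a fixed-length
record reader can treat a short trailing record as end-of-input instead of throwing
"read past eof". It is not the attached patch and not the actual TeraRecordReader code;
the class and method names are invented for illustration only.

import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: read fixed 100-byte TeraSort-style records, but tolerate a
// file whose size is not an exact multiple of RECORD_LENGTH.
public class FixedLengthRecordSketch {
  private static final int KEY_LENGTH = 10;    // TeraSort key size
  private static final int VALUE_LENGTH = 90;  // TeraSort value size
  private static final int RECORD_LENGTH = KEY_LENGTH + VALUE_LENGTH;

  // Returns true if a full record was read into buf, false at end of input.
  // A partial trailing record is reported as end of input rather than as an error.
  static boolean nextRecord(InputStream in, byte[] buf) throws IOException {
    int off = 0;
    while (off < RECORD_LENGTH) {
      int n = in.read(buf, off, RECORD_LENGTH - off);
      if (n < 0) {
        if (off == 0) {
          return false;  // clean EOF on a record boundary: no more records
        }
        // Fewer than RECORD_LENGTH bytes remained (file size not an exact
        // multiple of 100): drop the partial record instead of failing.
        return false;
      }
      off += n;
    }
    return true;
  }
}

Returning false for the partial trailing record lets sampling and split reading finish
cleanly, which matches the failure mode described above where the exception occurs only
near the end of the file.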