[
https://issues.apache.org/jira/browse/MAPREDUCE-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13009708#comment-13009708
]
Devaraj K commented on MAPREDUCE-2264:
--------------------------------------
Hi Ravi,
I am also facing the same problem. Reduce task progress goes above
100% when the reduce input size is large.
{code:title=Merger.java|borderStyle=solid}
private void adjustPriorityQueue(Segment<K, V> reader) throws IOException {
  long startPos = reader.getPosition();
  boolean hasNext = reader.next();
  long endPos = reader.getPosition();
  totalBytesProcessed += endPos - startPos;
  mergeProgress.set(totalBytesProcessed * progPerByte);
  if (hasNext) {
    adjustTop();
  } else {
    pop();
    reader.close();
  }
}
{code}
Here totalBytesProcessed grows larger than totalBytes, so
(totalBytesProcessed * progPerByte) exceeds 1.
I am using version 0.20.2, with the HADOOP-5210 and HADOOP-5572 patches
also applied. Please check the attached screenshot.
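One way to guard against this, sketched below as a minimal standalone class (this is a hypothetical illustration, not Hadoop's actual Progress/Merger code): clamp the computed fraction so the reported progress never exceeds 1.0, even when intermediate merge passes cause totalBytesProcessed to overshoot totalBytes.

```java
// Hypothetical sketch of clamped merge-progress accounting.
// Names (MergeProgress, addBytesProcessed) are invented for illustration.
public class MergeProgress {
    private final long totalBytes;        // expected total merge input
    private long totalBytesProcessed;     // may overshoot on multi-pass merges
    private float progress;               // reported fraction, kept in [0, 1]

    public MergeProgress(long totalBytes) {
        this.totalBytes = totalBytes;
    }

    public void addBytesProcessed(long bytes) {
        totalBytesProcessed += bytes;
        float progPerByte = 1.0f / totalBytes;
        // Clamp: segments re-read during intermediate merge passes can push
        // totalBytesProcessed past totalBytes, which would report > 100%.
        progress = Math.min(1.0f, totalBytesProcessed * progPerByte);
    }

    public float get() {
        return progress;
    }
}
```

With this clamp, a reduce task that has processed more bytes than estimated simply reports 100% instead of 120%.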
> Job status exceeds 100% in some cases
> --------------------------------------
>
> Key: MAPREDUCE-2264
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2264
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: jobtracker
> Reporter: Adam Kramer
>
> I'm looking now at my jobtracker's list of running reduce tasks. One of them
> is 120.05% complete, the other is 107.28% complete.
> I understand that these numbers are estimates, but there is no case in which
> an estimate of 100% for a non-complete task is better than an estimate of
> 99.99%, nor is there any case in which an estimate greater than 100% is valid.
> I suggest that whatever logic is computing these set 99.99% as a hard maximum.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira