[ https://issues.apache.org/jira/browse/MAPREDUCE-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567213#comment-13567213 ]
Sandy Ryza commented on MAPREDUCE-2264:
---------------------------------------

Thanks for the comments, Arun.

1. Agreed that Segment has too many constructors. While the patch adds 2, there were 5 already, so the existing pattern was just being followed. What would be the best way to fix this? There's also some confusing formatting in some of the old ones that could be improved.

2. Agreed.

3. From what I can tell, this is what is already done in the Segment constructor. localFS.getFileStatus(file).getLen() is then called again, separately, on the same path to calculate onDiskBytes. We could save a call to getFileStatus by constructing the Segment first and calculating onDiskBytes afterwards. (For reference, I am looking at finalMerge.) This issue existed before the patch as well.

4. What is the advantage of a create() method over a constructor? (A rough sketch of what that could look like is at the bottom of this message.)

With the patch working in its current incarnation, would it be better to file a new JIRA for the cleanups or do them in this one?

> Job status exceeds 100% in some cases
> --------------------------------------
>
>                 Key: MAPREDUCE-2264
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2264
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: jobtracker
>    Affects Versions: 0.20.2, 0.20.205.0
>            Reporter: Adam Kramer
>            Assignee: Devaraj K
>              Labels: critical-0.22.0
>             Fix For: 1.2.0, 2.0.3-alpha
>
>         Attachments: MAPREDUCE-2264-0.20.205-1.patch, MAPREDUCE-2264-0.20.205.patch, MAPREDUCE-2264-0.20.3.patch, MAPREDUCE-2264-branch-1-1.patch, MAPREDUCE-2264-branch-1-2.patch, MAPREDUCE-2264-branch-1.patch, MAPREDUCE-2264-trunk-1.patch, MAPREDUCE-2264-trunk-1.patch, MAPREDUCE-2264-trunk-2.patch, MAPREDUCE-2264-trunk-3.patch, MAPREDUCE-2264-trunk-4.patch, MAPREDUCE-2264-trunk-5.patch, MAPREDUCE-2264-trunk-5.patch, MAPREDUCE-2264-trunk.patch, more than 100%.bmp
>
>
> I'm looking now at my jobtracker's list of running reduce tasks. One of them is 120.05% complete, the other is 107.28% complete.
> I understand that these numbers are estimates, but there is no case in which an estimate of 100% for a non-complete task is better than an estimate of 99.99%, nor is there any case in which an estimate greater than 100% is valid.
> I suggest that whatever logic is computing these set 99.99% as a hard maximum.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
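
On 1 and 4: a rough, purely illustrative sketch of the kind of cleanup a create() factory could enable. The names here (DiskSegment, create, createWithLength, getRawDataLength) are hypothetical stand-ins, not Hadoop's actual Merger.Segment API, and java.io.File stands in for the FileSystem/Path calls:

{code:java}
// Illustrative only: a Segment-like class with one private canonical
// constructor and named static factories instead of many overloaded
// constructors.
public class DiskSegment {
    private final String file;          // stand-in for the on-disk Path
    private final long rawDataLength;   // length used for progress accounting
    private final boolean preserve;     // keep the file after merging?

    // Single canonical constructor: every factory funnels through here.
    private DiskSegment(String file, long rawDataLength, boolean preserve) {
        this.file = file;
        this.rawDataLength = rawDataLength;
        this.preserve = preserve;
    }

    // Factory for the common case: the length is looked up exactly once, here.
    public static DiskSegment create(String file, boolean preserve) {
        // stand-in for fs.getFileStatus(file).getLen()
        long length = new java.io.File(file).length();
        return new DiskSegment(file, length, preserve);
    }

    // Factory for callers that already know the length, so a caller such as
    // finalMerge would not need a second metadata lookup for onDiskBytes.
    public static DiskSegment createWithLength(String file, long knownLength,
                                               boolean preserve) {
        return new DiskSegment(file, knownLength, preserve);
    }

    public long getRawDataLength() {
        return rawDataLength;
    }
}
{code}

Named factories document intent where overloaded constructors with near-identical parameter lists do not, and new variants can be added without growing the constructor count; presumably that is the advantage being suggested.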
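
Separately, on the reported symptom itself: a minimal, self-contained sketch of the 99.99% hard maximum the description asks for. The ProgressClamp helper is hypothetical and is not the logic in any of the attached patches:

{code:java}
// Illustrative only: clamp an estimated progress value so an incomplete
// task can never report 100% or more.
public final class ProgressClamp {
    // The 99.99% hard maximum from the description, as a fraction.
    private static final float MAX_INCOMPLETE_PROGRESS = 0.9999f;

    /** Caps the estimate below 100% while the task is still running. */
    public static float clamp(float estimatedProgress, boolean taskComplete) {
        if (taskComplete) {
            return 1.0f;
        }
        return Math.min(Math.max(estimatedProgress, 0.0f),
                        MAX_INCOMPLETE_PROGRESS);
    }

    public static void main(String[] args) {
        System.out.println(clamp(1.2005f, false)); // the 120.05% case -> 0.9999
        System.out.println(clamp(1.0728f, false)); // the 107.28% case -> 0.9999
        System.out.println(clamp(0.85f, false));   // a sane estimate is left alone
    }
}
{code}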