[ https://issues.apache.org/jira/browse/HADOOP-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12630086#action_12630086 ]
Vinod K V commented on HADOOP-4129:
-----------------------------------

The smallest granularity we need for the memory limits of the TT and tasks is kB, or perhaps even MB; we don't need to track memory sizes at the byte level. Further, as Hemanth pointed out, this is consistent with the fact that ulimits are also specified in kB. So, I'll leave the computations in kB.

I discussed with Arun the ability to specify memory with GB/MB/KB suffixes in config files. Yes, it seems to be a good thing to have, which can prevent problems like the one that occurred with the test case. I'll create another JIRA and mark it for 0.19.

> Memory limits of TaskTracker and Tasks should be in kiloBytes.
> --------------------------------------------------------------
>
>                 Key: HADOOP-4129
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4129
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>            Priority: Blocker
>         Attachments: HADOOP-4129
>
>
> HADOOP-3759 uses memory limits in kilo-bytes and HADOOP-3581 changed it to be
> in bytes. Because of this, TestHighRAMJobs is failing on Linux. We should
> change this behaviour so that all memory limits are considered to be in
> kilo-bytes.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
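The suffix handling proposed in the comment could be sketched roughly as below. This is a hypothetical illustration only (the class and method names are invented, not Hadoop's actual configuration API): it parses a config value with an optional GB/MB/KB suffix and normalizes it to kilobytes, treating a bare number as kB per the comment's convention.

```java
// Hypothetical sketch, not Hadoop's real API: normalize a memory config
// value such as "512MB" or "2GB" to kilobytes. Bare numbers are taken
// to already be in kB, matching the convention discussed above.
public class MemorySize {
    public static long parseToKB(String value) {
        String v = value.trim().toUpperCase();
        long multiplierKB = 1L; // plain numbers are assumed to be kB
        if (v.endsWith("GB")) {
            multiplierKB = 1024L * 1024L;
            v = v.substring(0, v.length() - 2);
        } else if (v.endsWith("MB")) {
            multiplierKB = 1024L;
            v = v.substring(0, v.length() - 2);
        } else if (v.endsWith("KB")) {
            v = v.substring(0, v.length() - 2);
        }
        return Long.parseLong(v.trim()) * multiplierKB;
    }
}
```

Accepting such suffixes in config files would avoid the kind of unit mismatch that broke TestHighRAMJobs, since the unit travels with the value instead of being implied.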