[ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12628266#action_12628266 ]

dhruba borthakur commented on HADOOP-4018:
------------------------------------------

I think the goal is to avoid having the JobTracker allocate memory for the 
tasks of a job whose task count exceeds a certain limit. If the limit-check is 
in the scheduler, every scheduler has to implement it. Also, by the time the 
scheduler gets to look at the task, the JT might have already allocated some 
memory for task-related status.

I am assuming that it is better to check this in the JT rather than in the 
scheduler, but please let me know if my assumption seems invalid.

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch, 
> maxSplits4.patch, maxSplits5.patch, maxSplits6.patch
>
>
> We have seen instances where a user submitted a job with many thousands of 
> mappers. The JobTracker was running with a 3GB heap, but that was still not 
> enough to prevent memory thrashing from garbage collection; effectively the 
> JobTracker was not able to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. 
> This could be a configurable parameter. Are there other things that eat huge 
> globs of memory in the JobTracker?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
