[ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-4018:
-------------------------------------

    Attachment: maxSplits2.patch

This patch fails a job if the JobTracker detects that the number of splits for 
the job exceeds a configured upper limit. Please let me know of any gotchas 
this patch might have.

I like the idea of an upper limit on the total number of tasks across the 
entire JobTracker, but I have not implemented it yet. Do you think I should 
implement it as part of this patch? A rough sketch of how it might look 
follows below.
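If such a tracker-wide cap were added, it could be a check along these lines; again, the key and names are hypothetical, since nothing has been implemented:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class JobTrackerTaskLimiter {
  // Hypothetical key for a tracker-wide ceiling on the total task count.
  static final String MAX_TOTAL_TASKS_KEY = "mapred.jobtracker.max.total.tasks";

  // Reject a new job when its tasks would push the cluster-wide total
  // over the configured ceiling.
  static void checkTotalTaskLimit(Configuration conf,
      int currentTotalTasks, int newJobTasks) throws IOException {
    int maxTotal = conf.getInt(MAX_TOTAL_TASKS_KEY, -1);
    if (maxTotal > 0 && currentTotalTasks + newJobTasks > maxTotal) {
      throw new IOException("Job rejected: adding " + newJobTasks +
          " tasks would exceed the JobTracker-wide limit of " + maxTotal);
    }
  }
}
{code}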

BTW, in our cluster we have already set mapred.jobtracker.completeuserjobs.maximum 
to 20.

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits2.patch
>
>
> We have seen instances where a user submitted a job with many thousands of 
> mappers. The JobTracker was running with a 3GB heap, but that was still not 
> enough to prevent memory thrashing from garbage collection; effectively, the 
> JobTracker was not able to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. 
> This could be a configurable parameter. Are there other things that eat huge 
> globs of memory in the JobTracker?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
