[ https://issues.apache.org/jira/browse/HADOOP-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12620983#action_12620983 ]

dhruba borthakur commented on HADOOP-3925:
------------------------------------------

One of our users submitted a job that had a million mappers and a million 
reducers. The JobTracker was running with a 3GB heap. It went to 100% CPU usage 
(probably GC) and never came back to life, even after 10 minutes. Is there a way 
(in the current release) to prevent this from happening?
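
In the meantime, a job can be guarded on the client side before it ever reaches 
the JobTracker. Below is a minimal sketch against the old mapred API; the class 
GuardedSubmit, the constant MAX_TASKS_PER_JOB, and the cap value are all 
hypothetical, not an existing Hadoop setting. It counts the splits the job would 
generate and refuses to submit past the cap:

  import java.io.IOException;
  import org.apache.hadoop.mapred.*;

  public class GuardedSubmit {
    // Hypothetical client-side cap; not an existing Hadoop configuration knob.
    private static final int MAX_TASKS_PER_JOB = 100000;

    public static void submit(JobConf conf) throws IOException {
      // Ask the InputFormat how many map tasks the job would really create,
      // then add the requested reducers.
      InputSplit[] splits =
          conf.getInputFormat().getSplits(conf, conf.getNumMapTasks());
      int totalTasks = splits.length + conf.getNumReduceTasks();
      if (totalTasks > MAX_TASKS_PER_JOB) {
        throw new IOException("Job would create " + totalTasks
            + " tasks, exceeding the cap of " + MAX_TASKS_PER_JOB);
      }
      JobClient.runJob(conf);  // within the cap, submit as usual
    }
  }

This only protects against well-behaved clients, of course; a JobTracker-side 
limit is still needed for a real defense.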

> Configuration parameter to set the maximum number of mappers/reducers for a 
> job
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-3925
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3925
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: dhruba borthakur
>
> The JobTracker can be prone to a denial-of-service attack if a user submits a 
> job that has a very large number of tasks. This has happened once in our 
> cluster. It would be nice to have a configuration setting that limits the 
> maximum number of tasks that a single job can have. 
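
A submit-time check on the JobTracker side could look like the sketch below. 
The property name "mapred.jobtracker.maxtasks.per.job" and its -1 "unlimited" 
default are assumptions about how such a setting might be wired up, not 
committed code:

  import java.io.IOException;
  import org.apache.hadoop.mapred.JobConf;

  // Illustrative check, as it might sit inside JobTracker.submitJob().
  static void checkTaskLimit(JobConf job, int numMapTasks, int numReduceTasks)
      throws IOException {
    // Assumed property name; -1 (the assumed default) means "no limit".
    int maxTasks = job.getInt("mapred.jobtracker.maxtasks.per.job", -1);
    int requested = numMapTasks + numReduceTasks;
    if (maxTasks >= 0 && requested > maxTasks) {
      throw new IOException("Job requests " + requested
          + " tasks, which exceeds the configured limit of " + maxTasks);
    }
  }

Rejecting the job at submission keeps the task objects from ever being 
allocated in the JobTracker heap, which is what the 3GB-heap incident above 
ran into.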

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
