[ https://issues.apache.org/jira/browse/MAPREDUCE-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979015#comment-13979015 ]

Jason Lowe commented on MAPREDUCE-5856:
---------------------------------------

One related issue with allowing jobs to increase the limit beyond the default 
is that it can blow out the memory on the history server, which caches recent 
jobs.  In other words, a few jobs with a huge number of counters (and 
correspondingly huge AM heaps to handle them) might run OK but later cause an 
OOM on the history server as it tries to load all of those jobs.
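
To make the concern concrete, here is a rough back-of-envelope sketch (every 
figure below is an assumption chosen for illustration, not a measurement of 
the history server):

{code:java}
public class CounterMemoryEstimate {
  public static void main(String[] args) {
    // All values are illustrative assumptions, not measured defaults.
    int cachedJobs = 5;          // jobs held in the history server's loaded-job cache
    int tasksPerJob = 10_000;    // a large job
    int countersPerTask = 500;   // a raised per-job counter limit
    long bytesPerCounter = 100;  // rough guess per counter name + value

    long bytes = (long) cachedJobs * tasksPerJob * countersPerTask * bytesPerCounter;
    System.out.printf("~%.1f GB of counter data held in the history server heap%n",
        bytes / (1024.0 * 1024 * 1024));
  }
}
{code}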

> Counter limits always use defaults even if JobClient is given a different 
> Configuration
> ---------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5856
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5856
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 2.3.0, 2.4.0
>            Reporter: Robert Kanter
>            Assignee: Robert Kanter
>         Attachments: MAPREDUCE-5856.patch
>
>
> If you have a job with more than the default number of counters (i.e. more 
> than 120), and you create a JobClient with a Configuration where the counter 
> limit is increased (e.g. to 500), then JobClient will throw this exception:
> {noformat}
> org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many 
> counters: 121 max=120
> {noformat}
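
A minimal sketch of the failing scenario described above (illustrative only, 
not the attached patch): the limit is raised in the client's Configuration via 
the standard mapreduce.job.counters.max property, but the client-side limit 
check still uses the compiled-in default of 120, so fetching the counters of a 
job with more than 120 counters throws. The job id argument is a placeholder.

{code:java}
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class CounterLimitRepro {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    // Raise the per-job counter limit on the client side (e.g. to 500).
    conf.setInt("mapreduce.job.counters.max", 500);

    JobClient client = new JobClient(conf);
    RunningJob job = client.getJob(JobID.forName(args[0]));

    // Expected to succeed with the raised limit, but instead throws
    // LimitExceededException: Too many counters: 121 max=120
    // because the counter limit check is still initialized from the defaults.
    System.out.println(job.getCounters());
  }
}
{code}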


