[ https://issues.apache.org/jira/browse/HADOOP-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12656547#action_12656547 ]
Amar Kamat commented on HADOOP-4766:
------------------------------------

I ran the same setup (see [here|https://issues.apache.org/jira/browse/HADOOP-4766?focusedCommentId=12654052#action_12654052]) with the default config (i.e. _mapred.jobtracker.completeuserjobs.maximum_ = 100) and found that the _time-taken_ and _heap size_ values remain the same. This means the code for removing a job from memory (upon hitting the limit) is buggy.

> Hadoop performance degrades significantly as more and more jobs complete
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-4766
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4766
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.18.2, 0.19.0
>            Reporter: Runping Qi
>            Assignee: Amar Kamat
>            Priority: Blocker
>             Fix For: 0.18.3, 0.19.1, 0.20.0
>
>         Attachments: HADOOP-4766-v1.patch, map_scheduling_rate.txt
>
>
> When I ran the gridmix 2 benchmark load on a fresh cluster of 500 nodes with hadoop trunk,
> the gridmix load, consisting of 202 map/reduce jobs of various sizes, completed in 32 minutes.
> Then I ran the same set of jobs on the same cluster; they completed in 43 minutes.
> When I ran them the third time, it took (almost) forever --- the job tracker became non-responsive.
> The job tracker's heap size was set to 2 GB.
> The cluster is configured to keep up to 500 jobs in memory.
> The job tracker kept one CPU busy all the time. Looks like it was due to GC.
> I believe releases 0.18 and 0.19 have similar behavior.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
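For context, the intended behavior behind _mapred.jobtracker.completeuserjobs.maximum_ is a per-user cap on retained completed jobs: once a user exceeds the cap, the oldest completed job should be retired from the JobTracker's heap. The sketch below is a minimal, hypothetical illustration of that eviction pattern (class and method names are invented for this example, not the actual JobTracker code); the bug reported in the comment would correspond to the eviction step failing to actually release the job's in-memory state.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a per-user cap on retained completed jobs,
// analogous to mapred.jobtracker.completeuserjobs.maximum.
public class CompletedJobsCap {
    private final int maxCompletedPerUser;
    // Oldest completed job at the head, newest at the tail.
    private final Deque<String> completedJobs = new ArrayDeque<>();

    public CompletedJobsCap(int maxCompletedPerUser) {
        this.maxCompletedPerUser = maxCompletedPerUser;
    }

    // Record a job as completed. If the cap is already reached, evict and
    // return the oldest job id so its in-memory state can be freed; heap
    // usage then stays bounded no matter how many jobs finish.
    public String jobCompleted(String jobId) {
        String evicted = null;
        if (completedJobs.size() >= maxCompletedPerUser) {
            evicted = completedJobs.pollFirst();
        }
        completedJobs.addLast(jobId);
        return evicted;
    }

    public int size() {
        return completedJobs.size();
    }
}
```

If eviction works, heap size should plateau after the cap is reached; the unchanged heap-size measurements above suggest the real code never reached (or never completed) the equivalent of the `pollFirst()` branch.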