[ https://issues.apache.org/jira/browse/MAPREDUCE-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J updated MAPREDUCE-3789:
-------------------------------

    Attachment: MAPREDUCE-3789.patch

Patch with a possible fix and extended testcase.

The other {{ant test}} cases pass:

{code}
    [junit] Running org.apache.hadoop.mapred.TestCapacityScheduler
    [junit] Tests run: 35, Failures: 0, Errors: 0, Time elapsed: 62.769 sec
    [junit] Running org.apache.hadoop.mapred.TestCapacitySchedulerConf
    [junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 0.666 sec
    [junit] Running org.apache.hadoop.mapred.TestCapacitySchedulerServlet
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 11.041 sec
    [junit] Running org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 108.964 sec
    [junit] Running org.apache.hadoop.mapred.TestJobTrackerRestartWithCS
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 27.251 sec
{code}
                
> CapacityTaskScheduler may perform unnecessary reservations in heterogeneous 
> tracker environments
> -----------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-3789
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3789
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 1.1.0
>            Reporter: Harsh J
>            Assignee: Harsh J
>            Priority: Critical
>         Attachments: MAPREDUCE-3789.patch, MAPREDUCE-3789.patch
>
>
> Briefly, to reproduce:
> * Run the JT with CapacityTaskScheduler [say, cluster max map memory = 8G, 
> cluster map memory = 2G].
> * Run two TTs with varied capacity: one with 4 map slots, the other with 
> 3 map slots.
> * Run a job with two tasks, each demanding memory worth at least 4 slots 
> (map memory = 7G or so).
> * The job will begin running on TT #1, but will also end up reserving the 3 
> slots on TT #2, because the scheduler does not check a tracker's maximum 
> slot limit when reserving (it reserves greedily, hoping to gain more slots 
> in the future).
> * Other jobs that could have run on TT #2's 3 slots are thereby blocked by 
> this needless reservation.
> I've not yet tested MR2 for this, so feel free to weigh in if it affects 
> MR2 as well.
> For MR1, I've attached a test case initially to demonstrate this. A fix that 
> checks reservations against a tracker's maximum slots is to follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
