[ https://issues.apache.org/jira/browse/YARN-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J moved MAPREDUCE-3268 to YARN-284:
-----------------------------------------

          Component/s:     (was: scheduler)
                       scheduler
    Affects Version/s:     (was: 0.23.0)
                       2.0.0-alpha
                  Key: YARN-284  (was: MAPREDUCE-3268)
              Project: Hadoop YARN  (was: Hadoop Map/Reduce)
    
> YARN capacity scheduler doesn't spread MR tasks evenly on an underutilized 
> cluster
> ----------------------------------------------------------------------------------
>
>                 Key: YARN-284
>                 URL: https://issues.apache.org/jira/browse/YARN-284
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: scheduler
>    Affects Versions: 2.0.0-alpha
>            Reporter: Todd Lipcon
>
> The fair scheduler in MR1 has the behavior that, if a job is submitted to an 
> under-utilized cluster and the cluster has more open slots than tasks in the 
> job, the tasks are spread evenly throughout the cluster. This improves job 
> latency since more spindles and NICs are utilized to complete the job. In MR2 
> I see this issue causing significantly longer job runtimes when there is 
> excess capacity in the cluster -- especially on reducers, which sometimes end 
> up clumping together on a smaller set of nodes that then become disk/network 
> constrained.
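
For illustration only (this is not the reporter's code, nor the actual CapacityScheduler or FairScheduler logic), the small sketch below contrasts a naive "first node with free capacity" assignment, which clumps tasks onto a few nodes the way the report describes, with a "least-loaded node first" assignment, which spreads them evenly. All names here (SpreadVsClumpDemo, Node, assignGreedy, assignSpread, the 10x8-slot cluster) are hypothetical.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Hypothetical sketch: compares a packing policy that clumps tasks onto the
 * first nodes with capacity against a least-loaded policy that spreads tasks
 * evenly across an under-utilized cluster.
 */
public class SpreadVsClumpDemo {

    static class Node {
        final String name;
        final int slots;
        int assigned;

        Node(String name, int slots) {
            this.name = name;
            this.slots = slots;
        }

        int free() {
            return slots - assigned;
        }
    }

    /** Greedy packing: always use the first node that still has a free slot. */
    static void assignGreedy(List<Node> nodes, int tasks) {
        for (int t = 0; t < tasks; t++) {
            for (Node n : nodes) {
                if (n.free() > 0) {
                    n.assigned++;
                    break;
                }
            }
        }
    }

    /** Even spread: always use the node with the most free slots. */
    static void assignSpread(List<Node> nodes, int tasks) {
        for (int t = 0; t < tasks; t++) {
            nodes.stream()
                 .max(Comparator.comparingInt(Node::free))
                 .filter(n -> n.free() > 0)
                 .ifPresent(n -> n.assigned++);
        }
    }

    static List<Node> cluster() {
        List<Node> nodes = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            nodes.add(new Node("node" + i, 8)); // 10 nodes x 8 slots = 80 slots
        }
        return nodes;
    }

    static void print(String label, List<Node> nodes) {
        System.out.print(label + ": ");
        nodes.forEach(n -> System.out.print(n.name + "=" + n.assigned + " "));
        System.out.println();
    }

    public static void main(String[] args) {
        int reducers = 20; // far fewer tasks than the 80 open slots

        List<Node> packed = cluster();
        assignGreedy(packed, reducers);
        print("greedy (clumps on few nodes)", packed);

        List<Node> spread = cluster();
        assignSpread(spread, reducers);
        print("least-loaded (even spread)  ", spread);
    }
}

With 20 reducers on 10 nodes of 8 slots each, the greedy policy places all 20 tasks on node0-node2 while the spread policy places 2 per node, which is the latency difference the report attributes to disk/network contention on the clumped nodes.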

