[ https://issues.apache.org/jira/browse/YARN-685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13707290#comment-13707290 ]

Ravi Prakash commented on YARN-685:
-----------------------------------

This is not the behavior I am seeing in 0.23 / 2.2. On a 35-node cluster with 
14*1.5 of memory per node, I first ran a randomtextwriter job with 490 maps and 
70 reduces, then a sorter on the produced output. The distribution of tasks 
(shown as: number of nodes, tasks per node) was:

For 0.23, Map: 
     35 14
For 0.23, Reduce: 
      2 1
     32 2
      1 4
For 2.2, Map:
     35 14
For 2.2, Reduce:
      1 1
     33 2
      1 3

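To make the "uniform" question concrete, here is a small sanity-check sketch of my own (assuming, as noted above, that the two columns are node count and tasks per node) that sums the 2.2 distributions back to the job totals:

{code:java}
// My own quick check, not part of any patch. It assumes the two columns
// above are (number of nodes, tasks per node) and sums the reported 2.2
// distributions back to the job's totals.
public class DistributionCheck {

  /** Sum rows of {numberOfNodes, tasksPerNode} into a total task count. */
  static int total(int[][] distribution) {
    int sum = 0;
    for (int[] row : distribution) {
      sum += row[0] * row[1];
    }
    return sum;
  }

  public static void main(String[] args) {
    int[][] maps = { {35, 14} };                    // 2.2 map distribution
    int[][] reduces = { {1, 1}, {33, 2}, {1, 3} };  // 2.2 reduce distribution

    System.out.println("maps = " + total(maps));        // 35 * 14 = 490
    System.out.println("reduces = " + total(reduces));  // 1 + 66 + 3 = 70
  }
}
{code}

Both totals come back to 490 maps and 70 reduces: every node got exactly 14 maps, and every node got 1, 2, or 3 reduces, which is about as even as 70 reduces can be spread over 35 nodes. The 0.23 numbers sum the same way.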
Did you mean it's not exactly uniform?
                
> Capacity Scheduler is not distributing the reducers tasks across the cluster
> ----------------------------------------------------------------------------
>
>                 Key: YARN-685
>                 URL: https://issues.apache.org/jira/browse/YARN-685
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>    Affects Versions: 2.0.4-alpha
>            Reporter: Devaraj K
>
> If the total memory required by the reducers is less than the total cluster 
> memory, the scheduler does not assign the reducers to the nodes 
> uniformly (~uniformly). Also, at that time there are no other jobs or job 
> tasks running in the cluster.

