[ https://issues.apache.org/jira/browse/TEZ-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14092201#comment-14092201 ]

Bikas Saha commented on TEZ-1400:
---------------------------------

From the test change, the fix would look like a regression. The test was explicitly there to check that when min and max are == 0, the tasks are started immediately. That is the intended behavior. When min == max, all tasks should be started when min is reached. When min == 0, all tasks should be started immediately. That is why I don't think a fix is needed. This would be a case of user error: min/max of Reducer 4 should not be set to 0, since the contract then implies that the tasks should be started ASAP. I don't think we can fix the issue by breaking that contract, since it is a valid contract. In this case the user should set the value to 0.01 instead of 0.

> Reducers stuck when enabling auto-reduce parallelism (MRR case)
> ---------------------------------------------------------------
>
>                 Key: TEZ-1400
>                 URL: https://issues.apache.org/jira/browse/TEZ-1400
>             Project: Apache Tez
>          Issue Type: Bug
>    Affects Versions: 0.5.0
>            Reporter: Rajesh Balamohan
>            Assignee: Rajesh Balamohan
>              Labels: performance
>         Attachments: TEZ-1400.1.patch, dag.dot
>
>
> In the M -> R1 -> R2 case, if R1 is optimized by auto-parallelism, R2 gets stuck waiting for events, e.g.:
> Map 1: 0/1      Map 2: -/-      Map 5: 0/1      Map 6: 0/1      Map 7: 0/1      Reducer 3: 0/23 Reducer 4: 0/1
> ...
> Map 1: 1/1      Map 2: 148(+13)/161     Map 5: 1/1      Map 6: 1/1      Map 7: 1/1      Reducer 3: 0(+3)/3      Reducer 4: 0(+1)/1  <== auto-reduce parallelism kicks in
> ...
> Map 1: 1/1      Map 2: 161/161  Map 5: 1/1      Map 6: 1/1      Map 7: 1/1      Reducer 3: 3/3  Reducer 4: 0(+1)/1
> The job is stuck waiting for events in Reducer 4:
> [fetcher [Reducer_3] #23] org.apache.tez.runtime.library.common.shuffle.impl.ShuffleScheduler: copy(3 of 23 at 0.02 MB/s) <=== waiting for 20 more partitions, even though Reducer 3 has been optimized to use 3 reducers



--
This message was sent by Atlassian JIRA
(v6.2#6252)