[ https://issues.apache.org/jira/browse/TEZ-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062020#comment-17062020 ]

László Bodor commented on TEZ-4130:
-----------------------------------

[~belugabehr]: the maximum bucket number in Hive is driven by 
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/BucketCodec.java#L108
I think in Tez we could introduce a maximum number of grouped splits, which 
could be set by upstream applications such as Hive, and Tez could take care of 
the rest here:
https://github.com/apache/tez/blob/master/tez-mapreduce/src/main/java/org/apache/tez/mapreduce/grouper/TezSplitGrouper.java#L161
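To illustrate the idea, here is a minimal sketch (not actual Tez API; the class, method, and config key names are hypothetical, modeled on the existing tez.grouping.* properties) of how a split grouper could clamp the desired number of grouped splits to a configured maximum:

```java
// Hypothetical sketch of capping grouped splits; not the real TezSplitGrouper code.
public class GroupedSplitCap {

    // Hypothetical config key, named in the style of tez.grouping.min-size.
    static final String MAX_GROUPED_SPLITS = "tez.grouping.max-grouped-splits";

    /**
     * Clamp the desired number of grouped splits to a configured maximum.
     * A non-positive configuredMax means "no limit".
     */
    static int applyCap(int desiredNumSplits, int configuredMax) {
        if (configuredMax > 0 && desiredNumSplits > configuredMax) {
            return configuredMax;
        }
        return desiredNumSplits;
    }

    public static void main(String[] args) {
        // With a 4096 cap (matching Hive's bucket limit), 5000 desired splits
        // collapse to 4096; smaller counts pass through unchanged.
        System.out.println(applyCap(5000, 4096)); // prints 4096
        System.out.println(applyCap(3000, 4096)); // prints 3000
        System.out.println(applyCap(5000, -1));   // prints 5000 (no limit)
    }
}
```

The upstream application (e.g. Hive) would set the cap to its own bucket limit, and the grouper would apply it after computing its usual desired split count.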

> Config for max task parallelism in shuffle - 
> tez.shuffle-vertex-manager.max-task-parallelism
> --------------------------------------------------------------------------------------------
>
>                 Key: TEZ-4130
>                 URL: https://issues.apache.org/jira/browse/TEZ-4130
>             Project: Apache Tez
>          Issue Type: Improvement
>            Reporter: László Bodor
>            Assignee: László Bodor
>            Priority: Major
>
> During the investigation of a customer issue, I found that Tez generated a 
> DAG plan containing >4k tasks. It failed in Hive because of the bucket 
> number limit (4k). This can be configured properly, e.g. with bigger splits 
> (tez.grouping.min-size), but it might be more convenient for users to 
> configure a hard limit on the shuffle vertex manager.
> However, I'm not really sure it's correct to force a change of the max task 
> parallelism after split generation has already happened (e.g. 
> [HiveSplitGenerator|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java#L192-L244]):
> https://github.com/apache/tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/dag/library/vertexmanager/ShuffleVertexManager.java#L477



--
This message was sent by Atlassian Jira
(v8.3.4#803005)