[
https://issues.apache.org/jira/browse/HIVE-17018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16083411#comment-16083411
]
Chao Sun commented on HIVE-17018:
---------------------------------
What I'm thinking is that the new config (say A) would be a value smaller than
{{spark.executor.memory}} and would be divided evenly among all tasks in the
executor (so each task gets A / {{spark.executor.cores}}).
Another way is to specify A as the maximum hash table memory for a single Spark
task, i.e., a limit on the sum of the sizes of all hash tables in a single
work (MapWork or ReduceWork). In that case I think no change is needed to the
code related to {{connectedMapJoinSize}}.
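The first option above (dividing an executor-wide budget A evenly across task slots) could be sketched as follows. This is only an illustration of the arithmetic; the class and method names are hypothetical, not part of Hive or the HIVE-17018 patch.

```java
// Hypothetical sketch of option 1: an executor-wide hash table budget A
// divided evenly among the executor's task slots.
public class MapJoinMemoryLimit {

    /**
     * Each concurrently running task gets A / spark.executor.cores bytes
     * for its map join hash tables.
     */
    public static long perTaskHashTableLimit(long hashTableBudgetBytes,
                                             int executorCores) {
        if (executorCores <= 0) {
            throw new IllegalArgumentException("executorCores must be positive");
        }
        return hashTableBudgetBytes / executorCores;
    }

    public static void main(String[] args) {
        long budget = 1L << 30; // e.g. A = 1 GiB
        int cores = 4;          // spark.executor.cores = 4
        System.out.println(perTaskHashTableLimit(budget, cores)); // 268435456
    }
}
```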
> Small table is converted to map join even the total size of small tables
> exceeds the threshold(hive.auto.convert.join.noconditionaltask.size)
> ---------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-17018
> URL: https://issues.apache.org/jira/browse/HIVE-17018
> Project: Hive
> Issue Type: Bug
> Reporter: liyunzhang_intel
> Assignee: liyunzhang_intel
> Attachments: HIVE-17018_data_init.q, HIVE-17018.q, t3.txt
>
>
> We use {{hive.auto.convert.join.noconditionaltask.size}} as the threshold: if
> the sum of the sizes of n-1 of the tables/partitions in an n-way join is
> smaller than it, the join is converted to a map join. For example, take A
> join B join C join D join E, where the big table is A (100M) and the small
> tables are B (10M), C (10M), D (10M), E (10M), and set
> hive.auto.convert.join.noconditionaltask.size=20M. In the current code, E, D,
> and B are converted to map joins but C is not. In my understanding, because
> hive.auto.convert.join.noconditionaltask.size can only accommodate E and D,
> neither C nor B should be converted to a map join.
> Let's explain in more detail why B can be converted to a map join.
> In the current code,
> [SparkMapJoinOptimizer#getConnectedMapJoinSize|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java#L364]
> sums the sizes of all the map joins along the parent and child paths. The
> search stops when it encounters a [UnionOperator or
> ReduceSinkOperator|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java#L381].
> C is not converted to a map join because {{(connectedMapJoinSize +
> totalSize) > maxSize}} [see
> code|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java#L330], so the
> RS before C's join remains. When deciding whether B will be converted to a
> map join, {{getConnectedMapJoinSize}} returns 0 on encountering that
> [RS|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkMapJoinOptimizer.java#409],
> which causes {{(connectedMapJoinSize + totalSize) < maxSize}} to hold.
> [~xuefuz] or [~jxiang]: can you help check whether this is a bug, as you are
> more familiar with SparkJoinOptimizer?
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)