[ https://issues.apache.org/jira/browse/HIVE-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16056908#comment-16056908 ]
Chao Sun commented on HIVE-11297:
---------------------------------
{quote}
So can you retest it in your env? If the operator tree is like what you
mentioned, I think all the operator trees in
spark_dynamic_partition_pruning.q.out will be different from what I generated
in my env.
{quote}
Interesting... I'm not sure what caused the difference, maybe some
configuration? I've tried several times in my env and the FIL is always
followed by a SEL operator. Nevertheless, this is not an important issue. Will
take a look at the RB.
> Combine op trees for partition info generating tasks [Spark branch]
> -------------------------------------------------------------------
>
> Key: HIVE-11297
> URL: https://issues.apache.org/jira/browse/HIVE-11297
> Project: Hive
> Issue Type: Bug
> Affects Versions: spark-branch
> Reporter: Chao Sun
> Assignee: liyunzhang_intel
> Attachments: HIVE-11297.1.patch, HIVE-11297.2.patch,
> HIVE-11297.3.patch, HIVE-11297.4.patch, HIVE-11297.5.patch,
> HIVE-11297.6.patch, HIVE-11297.7.patch
>
>
> Currently, for dynamic partition pruning in Spark, if a small table generates
> partition info for more than one partition column, multiple operator trees
> are created, which all start from the same table scan op but have different
> spark partition pruning sinks.
> As an optimization, we can combine these op trees so that we don't have to
> scan the table multiple times.
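For illustration, a query along the following lines is the kind of case described above (the table and column names are hypothetical; hive.spark.dynamic.partition.pruning is the existing Hive-on-Spark config): the small table dim supplies partition info for both partition columns of fact, so without this change each pruning sink sits in its own op tree with its own scan of dim.

{code:sql}
-- Hypothetical schema: a fact table partitioned on two columns and a
-- small dimension table joined on both of them.
CREATE TABLE fact (value STRING) PARTITIONED BY (ds STRING, hr STRING);
CREATE TABLE dim (ds STRING, hr STRING, name STRING);

SET hive.spark.dynamic.partition.pruning=true;

-- Both join keys are partition columns of fact, so the scan of dim
-- produces pruning info for ds and for hr: two spark partition pruning
-- sinks rooted at the same table scan. Combining the op trees lets one
-- scan of dim feed both sinks instead of scanning dim once per column.
SELECT f.value
FROM fact f
JOIN dim d ON f.ds = d.ds AND f.hr = d.hr
WHERE d.name = 'x';
{code}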