[ 
https://issues.apache.org/jira/browse/HIVE-18111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16262016#comment-16262016
 ] 

Rui Li commented on HIVE-18111:
-------------------------------

Operator IDs might not be unique, e.g. when we clone the operator tree, a cloned 
operator will have the same ID as the original operator. So the 3rd patch adds a 
unique ID to each DPP sink operator, and uses this unique ID as the event source 
ID in the map work.
[~stakiar], [~xuefuz] could you take a look? Thanks.
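The idea in the comment above can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names are not from the actual patch): a process-wide counter hands out IDs that stay unique even when an operator tree, and hence its operator IDs, is cloned.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: each DPP sink gets a unique ID from a shared counter,
// independent of the operator ID that tree cloning can duplicate.
public class DppSinkIdDemo {
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    // Called once per DPP sink; a cloned sink would call it again and
    // therefore receive a fresh ID rather than sharing the original's.
    public static String newUniqueId() {
        return "sink_" + COUNTER.getAndIncrement();
    }

    public static void main(String[] args) {
        String original = newUniqueId();
        String clone = newUniqueId(); // the clone's event source ID differs
        System.out.println(original + " " + clone);
    }
}
```

The map work can then register the sink's unique ID as its event source ID, so events from the original and the clone are never conflated.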

> Fix temp path for Spark DPP sink
> --------------------------------
>
>                 Key: HIVE-18111
>                 URL: https://issues.apache.org/jira/browse/HIVE-18111
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>         Attachments: HIVE-18111.1.patch, HIVE-18111.2.patch, 
> HIVE-18111.3.patch
>
>
> Before HIVE-17877, each DPP sink has only one target work. The output path of 
> a DPP work is {{TMP_PATH/targetWorkId/dppWorkId}}. When we do the pruning, 
> each map work reads DPP outputs under {{TMP_PATH/targetWorkId}}.
> After HIVE-17877, each DPP sink can have multiple target works, so a map work 
> may need to read DPP outputs from multiple 
> {{TMP_PATH/targetWorkId}} directories. To solve this, I think we can use a DPP output 
> path specific to each query, e.g. {{QUERY_TMP_PATH/dpp_output}}. Each DPP 
> work outputs to {{QUERY_TMP_PATH/dpp_output/dppWorkId}}. And each map work 
> reads from {{QUERY_TMP_PATH/dpp_output}}.
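The path change described above can be sketched as two helper functions. This is an illustrative assumption, not the patch itself; the path layouts come from the issue description, while the method names are hypothetical.

```java
// Hypothetical sketch of the two layouts from the issue description.
public class DppPathDemo {
    // Before: DPP output is keyed by target work, so a map work with
    // several source DPP works must scan several TMP_PATH/targetWorkId dirs.
    public static String oldOutputPath(String tmpPath, int targetWorkId, int dppWorkId) {
        return tmpPath + "/" + targetWorkId + "/" + dppWorkId;
    }

    // After: all DPP output for a query lives under one per-query directory,
    // keyed only by dppWorkId; each map work reads QUERY_TMP_PATH/dpp_output.
    public static String newOutputPath(String queryTmpPath, int dppWorkId) {
        return queryTmpPath + "/dpp_output/" + dppWorkId;
    }

    public static void main(String[] args) {
        System.out.println(oldOutputPath("TMP_PATH", 1, 3));
        System.out.println(newOutputPath("QUERY_TMP_PATH", 3));
    }
}
```

With the new layout, a map work no longer needs to know which target works its source DPP works were originally associated with.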



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
