[ https://issues.apache.org/jira/browse/HIVE-18111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16281804#comment-16281804 ]
Rui Li commented on HIVE-18111:
-------------------------------
Sorry about the delay. Patch v4 fixes a vectorization issue and adds a test.
The new test generates a DPP plan like the following:
{noformat}
Reducer 12
    Reduce Operator Tree:
      Group By Operator
        aggregations: max(VALUE._col0)
        mode: mergepartial
        outputColumnNames: _col0
        Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
        Filter Operator
          predicate: _col0 is not null (type: boolean)
          Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
          Group By Operator
            keys: _col0 (type: string)
            mode: hash
            outputColumnNames: _col0
            Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string)
            outputColumnNames: _col0
            Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
            Group By Operator
              keys: _col0 (type: string)
              mode: hash
              outputColumnNames: _col0
              Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
              Spark Partition Pruning Sink Operator
                Target column: [1:p (string), 5:p (string)]
                partition key expr: [p, p]
                Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
                target works: [Map 1, Map 5]
Reducer 16
    Reduce Operator Tree:
      Group By Operator
        aggregations: min(VALUE._col0)
        mode: mergepartial
        outputColumnNames: _col0
        Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
        Filter Operator
          predicate: _col0 is not null (type: boolean)
          Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column stats: NONE
          Group By Operator
            keys: _col0 (type: string)
            mode: hash
            outputColumnNames: _col0
            Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string)
            outputColumnNames: _col0
            Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
            Group By Operator
              keys: _col0 (type: string)
              mode: hash
              outputColumnNames: _col0
              Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
              Spark Partition Pruning Sink Operator
                Target column: [5:p (string)]
                partition key expr: [p]
                Statistics: Num rows: 2 Data size: 368 Basic stats: COMPLETE Column stats: NONE
                target works: [Map 5]
{noformat}
Without the patch, Map 5 needs to read from {{TMP_PATH/5/12}} and
{{TMP_PATH/5/16}}. After the DPP sinks are combined, Map 5 has the same base
dir as Map 1 and tries to read from {{TMP_PATH/1/12}} and {{TMP_PATH/1/16}},
so the query fails because {{TMP_PATH/1/16}} doesn't exist. With the patch,
all map works read from the same base dir and find each individual DPP sink's
output by its unique ID.
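To make the failure concrete, here is a minimal sketch of the two lookup
schemes (plain Java; the class and method names are hypothetical, not Hive's
actual implementation):
{code:java}
import java.io.File;

// Illustrative sketch only; names are hypothetical, not Hive's actual classes.
public class DppLookupSketch {

    // Old scheme: each map work resolves TMP_PATH/<targetWorkId>/<dppWorkId>.
    static File oldLookup(File tmpPath, int targetWorkId, int dppWorkId) {
        return new File(new File(tmpPath, String.valueOf(targetWorkId)),
                String.valueOf(dppWorkId));
    }

    // New scheme: a single query-scoped base dir, keyed by the DPP work's ID.
    static File newLookup(File queryTmpPath, int dppWorkId) {
        return new File(new File(queryTmpPath, "dpp_output"),
                String.valueOf(dppWorkId));
    }

    public static void main(String[] args) {
        // After HIVE-17877 combines the sinks, Map 5 shares Map 1's base dir
        // and resolves TMP_PATH/1/16 -- a path Reducer 16 never wrote.
        System.out.println(oldLookup(new File("TMP_PATH"), 1, 16));
        // With the patch, both Map 1 and Map 5 resolve the same existing path.
        System.out.println(newLookup(new File("QUERY_TMP_PATH"), 16));
    }
}
{code}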
[~stakiar], [~xuefuz], could you take another look? Thanks.
> Fix temp path for Spark DPP sink
> --------------------------------
>
> Key: HIVE-18111
> URL: https://issues.apache.org/jira/browse/HIVE-18111
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Reporter: Rui Li
> Assignee: Rui Li
> Attachments: HIVE-18111.1.patch, HIVE-18111.2.patch,
> HIVE-18111.3.patch, HIVE-18111.4.patch
>
>
> Before HIVE-17877, each DPP sink had only one target work. The output path of
> a DPP work is {{TMP_PATH/targetWorkId/dppWorkId}}, and when we do the pruning,
> each map work reads the DPP outputs under {{TMP_PATH/targetWorkId}}.
> After HIVE-17877, each DPP sink can have multiple target works, so it's
> possible that a map work needs to read DPP outputs from multiple
> {{TMP_PATH/targetWorkId}} directories. To solve this, I think we can have a
> DPP output path specific to each query, e.g. {{QUERY_TMP_PATH/dpp_output}}.
> Each DPP work writes to {{QUERY_TMP_PATH/dpp_output/dppWorkId}}, and each map
> work reads from {{QUERY_TMP_PATH/dpp_output}}.
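A minimal sketch of that proposed layout (plain Java; the helper names are
hypothetical, not Hive's actual implementation):
{code:java}
import java.io.File;

// Hypothetical illustration of the query-scoped DPP output layout.
public class DppOutputLayout {

    // Writer side: each DPP work emits its pruning output under the shared,
    // query-scoped base dir, in a subdirectory named by its unique work ID.
    static File sinkOutputDir(File queryTmpPath, int dppWorkId) {
        return new File(new File(queryTmpPath, "dpp_output"),
                String.valueOf(dppWorkId));
    }

    // Reader side: every map work scans the shared base dir and picks out
    // each DPP work's output by that unique ID.
    static File[] pruningOutputs(File queryTmpPath) {
        return new File(queryTmpPath, "dpp_output").listFiles();
    }
}
{code}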