[
https://issues.apache.org/jira/browse/HIVE-25335?focusedWorklogId=771137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-771137
]
ASF GitHub Bot logged work on HIVE-25335:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 17/May/22 04:25
Start Date: 17/May/22 04:25
Worklog Time Spent: 10m
Work Description: zhengchenyu commented on PR #3292:
URL: https://github.com/apache/hive/pull/3292#issuecomment-1128396054
@zabetak The UTs pass in my environment; the errors seem to happen in the post
stage. Because I changed the logic of maxDataSize, some explain outputs may
have changed. Many explain outputs probably need to be repaired, so I need to
set up a Jenkins pipeline. Is there any documentation about the Hive Jenkins
pipeline? Many problems occurred when I set it up in my dev environment.
Issue Time Tracking
-------------------
Worklog Id: (was: 771137)
Time Spent: 2h 50m (was: 2h 40m)
> Unreasonable reduce number when joining a big table (with a small row count)
> and a small table
> ------------------------------------------------------------------------------
>
> Key: HIVE-25335
> URL: https://issues.apache.org/jira/browse/HIVE-25335
> Project: Hive
> Issue Type: Improvement
> Reporter: zhengchenyu
> Assignee: zhengchenyu
> Priority: Major
> Labels: pull-request-available
> Attachments: HIVE-25335.001.patch
>
> Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> I found a slow application in our cluster: the number of bytes processed by
> each reducer is huge, but there are only two reducers.
> When I debugged it, I found the reason. In this SQL, one table is big in size
> (about 30G) but has few rows (about 3.5M), while the other table is small in
> size (about 100M) but has more rows (about 3.6M). So JoinStatsRule.process
> uses only the 100M figure to estimate the reducer number, even though 30G of
> data actually has to be processed.
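The effect described above can be sketched outside of Hive. This is a minimal, hypothetical model (not Hive's actual code) of a reducer-count estimator that divides input size by a bytes-per-reducer threshold, analogous to hive.exec.reducers.bytes.per.reducer (256 MB is assumed here); the table sizes are the approximate figures from the report:

```python
# Hypothetical sketch of size-based reducer estimation; the constants and
# function are illustrative, not Hive's implementation.

BYTES_PER_REDUCER = 256 * 1024 * 1024  # assumed threshold, akin to
                                       # hive.exec.reducers.bytes.per.reducer
MAX_REDUCERS = 1009                    # assumed cap, akin to hive.exec.reducers.max

def estimate_reducers(data_size_bytes: int) -> int:
    """Reducers = ceil(dataSize / bytesPerReducer), clamped to [1, MAX_REDUCERS]."""
    reducers = -(-data_size_bytes // BYTES_PER_REDUCER)  # ceiling division
    return max(1, min(reducers, MAX_REDUCERS))

small_side = 100 * 1024 * 1024        # ~100 MB, ~3.6M rows
big_side = 30 * 1024 * 1024 * 1024    # ~30 GB, ~3.5M rows

# Estimating from the small side's size yields almost no parallelism,
# while the bytes actually shuffled call for far more reducers.
print(estimate_reducers(small_side))  # 1
print(estimate_reducers(big_side))    # 120
```

Under these assumed thresholds, estimating from the 100M side gives a single reducer while the 30G actually processed would justify 120, which is the disparity the issue describes.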
--
This message was sent by Atlassian Jira
(v8.20.7#820007)