[ 
https://issues.apache.org/jira/browse/HIVE-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-15104:
--------------------------
    Attachment: HIVE-15104.1.patch

Spark needs the hash code on the reducer side for groupBy shuffling. Since 
groupBy does no ordering, the reducer has to put the shuffled data into a hash 
map to combine values by key, and for that it needs the hash code. So we just 
need to keep the hash code during SerDe when the groupBy shuffle is used.
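
Here is a minimal sketch of the idea (for illustration only, not the actual 
patch; the class name HiveKeySerializer is hypothetical, and it assumes Kryo's 
{{Serializer}} API and Hive's {{HiveKey}}):
{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import org.apache.hadoop.hive.ql.io.HiveKey;

// Sketch: write only the used bytes plus the hash code, instead of letting
// Kryo reflectively serialize every field of HiveKey.
public class HiveKeySerializer extends Serializer<HiveKey> {
  @Override
  public void write(Kryo kryo, Output output, HiveKey key) {
    output.writeInt(key.getLength(), true);                // var-length used size
    output.writeBytes(key.getBytes(), 0, key.getLength()); // skip spare capacity
    output.writeInt(key.hashCode());                       // keep the hash for the reducer
  }

  @Override
  public HiveKey read(Kryo kryo, Input input, Class<HiveKey> type) {
    int len = input.readInt(true);
    byte[] bytes = input.readBytes(len);
    return new HiveKey(bytes, input.readInt());            // restores a valid hash code
  }
}
{code}
With this, the deserialized key hashes to the same value on the reducer side 
as on the map side, so the map used to combine values by key still works.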

Uploading a PoC patch to demonstrate the idea. It disables Kryo relocation, 
which is not acceptable for a real fix.

Also did a simple test to see the improvement. The test runs the query 
{{select key, count(*) from A group by key order by key}}, where A contains 
40,000,000 records with 20 distinct keys. The measurement is the number of 
bytes written during shuffle. I tested optimizing HiveKey alone, as well as 
optimizing both HiveKey and BytesWritable. Even for a simple class like 
BytesWritable, the custom SerDe does better than the generic one.
|| ||Opt(N)||Opt(Y, Key)||Opt(Y, Key + Value)||
|GBY(Y)|2269|1953|1699|
|GBY(N)|2269|1713|1460|
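
For reference, a sketch of what a custom serializer for BytesWritable could 
look like (again hypothetical, assuming Kryo's {{Serializer}} API). The point 
is that it writes only the used length and bytes, whereas Kryo's generic 
FieldSerializer also writes the unused capacity of the backing array:
{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import org.apache.hadoop.io.BytesWritable;

// Sketch: serialize only the used part of the backing byte array.
public class BytesWritableSerializer extends Serializer<BytesWritable> {
  @Override
  public void write(Kryo kryo, Output output, BytesWritable value) {
    output.writeInt(value.getLength(), true);
    output.writeBytes(value.getBytes(), 0, value.getLength());
  }

  @Override
  public BytesWritable read(Kryo kryo, Input input, Class<BytesWritable> type) {
    int len = input.readInt(true);
    return new BytesWritable(input.readBytes(len));
  }
}
{code}
Such serializers could be registered through a custom 
{{org.apache.spark.serializer.KryoRegistrator}} set via 
{{spark.kryo.registrator}}.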

> Hive on Spark generates more shuffle data than Hive on MR
> ---------------------------------------------------------
>
>                 Key: HIVE-15104
>                 URL: https://issues.apache.org/jira/browse/HIVE-15104
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1
>            Reporter: wangwenli
>            Assignee: Rui Li
>         Attachments: HIVE-15104.1.patch
>
>
> The same SQL, running on the Spark and MR engines, will generate different 
> sizes of shuffle data.
> I think it is because Hive on MR serializes only part of the HiveKey, while 
> Hive on Spark, which uses Kryo, serializes the full HiveKey object.
> What is your opinion?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
