[ 
https://issues.apache.org/jira/browse/HIVE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010149#comment-14010149
 ] 

Gunther Hagleitner commented on HIVE-7121:
------------------------------------------

[~appodictic] I think you're right. This definitely affects bucketing. 

Options I see are:

- Only do it for queries that do not insert into bucketed tables, i.e.: leave 
the bucketing hash function as badly distributed as it is, but fix shuffle 
joins, group-bys, and inserts into other tables.
- Remember the hash function in table metadata. This is slightly tricky because 
we probably don't want a mix of hash functions in the same table (different 
partitions having different bucketing schemes would probably destroy any 
chance of SMB joins on that table). Maybe we even want only one function per DB, 
to make sure different tables in a DB can be joined without looking up the hash 
function used for each.
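The second option could look roughly like the sketch below: persist a hash
function id alongside the bucket count, compute bucket ids through it, and only
allow SMB-style assumptions when both tables match. This is a hedged
illustration of the idea, not Hive code; the names (HashFuncId, TableMeta,
bucketFor) are hypothetical.

```java
// Hypothetical sketch of recording the bucketing hash function in table
// metadata, so readers/writers agree and SMB compatibility can be checked.
public class BucketHashing {
    enum HashFuncId { LEGACY, MURMUR3 }

    static final class TableMeta {
        final String name;
        final HashFuncId hashFunc;   // persisted with the table definition
        final int numBuckets;
        TableMeta(String name, HashFuncId hashFunc, int numBuckets) {
            this.name = name;
            this.hashFunc = hashFunc;
            this.numBuckets = numBuckets;
        }
    }

    static int hash(HashFuncId f, int key) {
        switch (f) {
            case LEGACY:
                return key;               // an int's hashCode is the value itself
            case MURMUR3: {
                // Murmur3's 32-bit finalizer (fmix32) as a stand-in mixer.
                int h = key;
                h ^= h >>> 16; h *= 0x85ebca6b;
                h ^= h >>> 13; h *= 0xc2b2ae35;
                h ^= h >>> 16;
                return h;
            }
        }
        throw new AssertionError("unreachable");
    }

    // Bucket id is always computed via the function recorded for the table.
    static int bucketFor(TableMeta t, int key) {
        return (hash(t.hashFunc, key) & Integer.MAX_VALUE) % t.numBuckets;
    }

    // SMB joins are only safe when both sides bucket identically.
    static boolean smbCompatible(TableMeta a, TableMeta b) {
        return a.hashFunc == b.hashFunc && a.numBuckets == b.numBuckets;
    }
}
```

The per-DB variant in the comment would simply hang the HashFuncId off the
database instead of each table.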

Why are these unit tests failing, though? I didn't think we changed the 
bucketing scheme between Hive 12 and 13. Did we?

> Use murmur hash to distribute HiveKey
> -------------------------------------
>
>                 Key: HIVE-7121
>                 URL: https://issues.apache.org/jira/browse/HIVE-7121
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>            Reporter: Gopal V
>            Assignee: Gopal V
>         Attachments: HIVE-7121.1.patch, HIVE-7121.WIP.patch
>
>
> The current hashCode implementation produces poor parallelism when dealing 
> with single integers or doubles.
> And for partitioned inserts into a 1 bucket table, there is a significant 
> hotspot on Reducer #31.
> Removing the magic number 31 and using a more normal hash algorithm would 
> help fix these hotspots.
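The poor parallelism for single integers is easy to reproduce in a toy demo
(this is an illustration of the general problem, not Hive's actual HiveKey
code): an int's hashCode is the value itself, so keys with a regular stride can
all collapse onto one reducer, while a murmur-style finalizer spreads them.

```java
import java.util.HashSet;
import java.util.Set;

public class HashSkewDemo {
    // Murmur3's 32-bit finalizer ("fmix32"), a standard avalanche step,
    // used as a stand-in for a "more normal" hash algorithm.
    static int murmurMix(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Count how many of numReducers actually receive rows when the keys are
    // single ints with a regular stride (common for surrogate keys).
    static int reducersHit(int numReducers, boolean mix) {
        Set<Integer> hit = new HashSet<>();
        for (int k = 0; k < 10_000; k++) {
            int key = k * numReducers;           // stride collides with reducer count
            int h = mix ? murmurMix(key) : key;  // identity hash for ints
            hit.add((h & Integer.MAX_VALUE) % numReducers);
        }
        return hit.size();
    }

    public static void main(String[] args) {
        // With the identity hash, every key lands on reducer 0 (one hotspot);
        // the murmur finalizer spreads the same keys across the reducers.
        System.out.println("identity hash: " + reducersHit(64, false) + " reducers hit");
        System.out.println("murmur mix:    " + reducersHit(64, true) + " reducers hit");
    }
}
```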



--
This message was sent by Atlassian JIRA
(v6.2#6252)