[ 
https://issues.apache.org/jira/browse/HIVE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010197#comment-14010197
 ] 

Gopal V commented on HIVE-7121:
-------------------------------

[~hagleitn]: The unit tests are failing because I'm applying the same insert 
mechanism to both flat and partitioned tables.

The patch works correctly when the following code fragment is hit:

{code}
      // replace bucketing columns with hashcode % numBuckets
      int buckNum = 0;
      if (bucketEval != null) {
        buckNum = computeBucketNumber(row, conf.getNumBuckets());
        cachedKeys[0][buckColIdxInKey] = new IntWritable(buckNum);
      }
{code}

This is indeed set up correctly when doing dynamic partitioned inserts. It looks 
like this optimization is missed for flat table inserts.
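For reference, the bucket-number computation that fragment relies on boils down to something like the sketch below. This is a simplified stand-in, not the actual Hive code: the real `computeBucketNumber` hashes the bucketing columns through the evaluator/ObjectInspector machinery, whereas here we just take a precomputed row hash.

{code}
// Hedged sketch: rowHash stands in for the hash Hive derives from the
// row's bucketing columns via its ObjectInspector machinery.
public class BucketNumber {
    static int computeBucketNumber(int rowHash, int numBuckets) {
        // Mask off the sign bit so the modulus is never negative,
        // then map the hash onto [0, numBuckets).
        return (rowHash & Integer.MAX_VALUE) % numBuckets;
    }

    public static void main(String[] args) {
        System.out.println(computeBucketNumber(5, 4));   // 1
        System.out.println(computeBucketNumber(-7, 4));  // 1, never negative
    }
}
{code}

The sign-bit mask matters: a plain `rowHash % numBuckets` can return a negative bucket index for negative hashes.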

> Use murmur hash to distribute HiveKey
> -------------------------------------
>
>                 Key: HIVE-7121
>                 URL: https://issues.apache.org/jira/browse/HIVE-7121
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>            Reporter: Gopal V
>            Assignee: Gopal V
>         Attachments: HIVE-7121.1.patch, HIVE-7121.WIP.patch
>
>
> The current hashCode implementation produces poor parallelism when dealing 
> with single integers or doubles.
> And for partitioned inserts into a 1 bucket table, there is a significant 
> hotspot on Reducer #31.
> Removing the magic number 31 and using a better-distributed hash algorithm 
> would help eliminate these hotspots.
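The skew described above can be illustrated with a small standalone demo (an assumption-laden sketch, not Hive code): Java's default double hash, `bits ^ (bits >>> 32)`, has all-zero low-order bits for small integral doubles, so `hash % numReducers` sends every such key to reducer 0. Passing the same hash through a Murmur3-style finalizer (the well-known `fmix32` mix, reproduced inline) spreads the keys across reducers.

{code}
public class HashSkewDemo {
    // Murmur3 32-bit finalization mix: avalanches all input bits.
    static int fmix32(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Same sign-safe modulus Hive-style key partitioning uses.
    static int reducer(int hash, int numReducers) {
        return (hash & Integer.MAX_VALUE) % numReducers;
    }

    public static void main(String[] args) {
        int numReducers = 32;
        java.util.Set<Integer> naive = new java.util.HashSet<>();
        java.util.Set<Integer> mixed = new java.util.HashSet<>();
        for (double v = 1.0; v <= 1000.0; v += 1.0) {
            long bits = Double.doubleToLongBits(v);
            int h = (int) (bits ^ (bits >>> 32)); // java.lang.Double.hashCode
            naive.add(reducer(h, numReducers));
            mixed.add(reducer(fmix32(h), numReducers));
        }
        System.out.println("naive reducers hit: " + naive.size()); // 1
        System.out.println("mixed reducers hit: " + mixed.size()); // nearly all 32
    }
}
{code}

All 1000 integral doubles collapse onto a single reducer under the naive hash, while the mixed hash uses essentially the whole reducer range.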



--
This message was sent by Atlassian JIRA
(v6.2#6252)
