[ https://issues.apache.org/jira/browse/HIVE-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123053#comment-14123053 ]
Brock Noland commented on HIVE-7956:
------------------------------------

Excellent research!

bq. reducing the chance that a record be spilled to disk

I think we need something that works in a deterministic manner.

bq. We may need to add write and readFields to HiveKey in order to keep the extra hash code. I doubt such a change is acceptable.

Yes, we won't want to serialize the extra bytes for other use cases. I think there might be a few other solutions:
1) Create a HiveSparkKey which does serialize the hashcode
2) Enhance RowContainer to serialize the hashcode based on a configuration option

> When inserting into a bucketed table, all data goes to a single bucket [Spark Branch]
> -------------------------------------------------------------------------------------
>
>                 Key: HIVE-7956
>                 URL: https://issues.apache.org/jira/browse/HIVE-7956
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>
> I created a bucketed table:
> {code}
> create table testBucket(x int,y string) clustered by(x) into 10 buckets;
> {code}
> Then I ran a query like:
> {code}
> set hive.enforce.bucketing = true;
> insert overwrite table testBucket select intCol,stringCol from src;
> {code}
> Here {{src}} is a simple textfile-based table containing 40000000 records (not bucketed). The query launches 10 reduce tasks but all the data goes to only one of them.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
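Option 1 above (a Spark-specific key that serializes its extra hash code alongside the key bytes) could look roughly like the following minimal sketch. The class and method names here are illustrative, not Hive's actual API; the write/readFields pair just mirrors Hadoop's Writable contract using plain java.io streams, so the hash code survives a spill-and-restore round trip instead of being dropped:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of option 1: a key that carries an extra hash code
// through serialization. Not Hive's real HiveKey/HiveSparkKey class.
class SparkHiveKeySketch {
    private byte[] bytes = new byte[0];
    private int hashCode;

    void set(byte[] data, int hashCode) {
        this.bytes = data.clone();
        this.hashCode = hashCode;
    }

    int getHashCodeField() { return hashCode; }
    byte[] getBytes() { return bytes; }

    // Serialize both the key bytes and the extra hash code.
    void write(DataOutputStream out) throws IOException {
        out.writeInt(bytes.length);
        out.write(bytes);
        out.writeInt(hashCode); // the extra field a plain key would drop
    }

    // Restore both fields, so partitioning stays deterministic after a spill.
    void readFields(DataInputStream in) throws IOException {
        int len = in.readInt();
        bytes = new byte[len];
        in.readFully(bytes);
        hashCode = in.readInt();
    }
}
```

A usage round trip: write the key to a ByteArrayOutputStream, read it back through readFields, and the restored key reports the same hash code, which is the property the bucketing path needs.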