Thanks Ryan and Reynold for the information!

 

Cheers,

Tyson

 

From: Ryan Blue <rb...@netflix.com> 
Sent: Wednesday, March 6, 2019 3:47 PM
To: Reynold Xin <r...@databricks.com>
Cc: tcon...@gmail.com; Spark Dev List <dev@spark.apache.org>
Subject: Re: Hive Hash in Spark

 

I think this was needed to add support for bucketed Hive tables. Like Tyson 
noted, if the other side of a join can be bucketed the same way, then Spark can 
use a bucketed join. I have long-term plans to support this in the DataSourceV2 
API, but I don't think we are very close to implementing it yet.
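The shuffle-avoidance idea above can be sketched in a few lines of plain Python (this is a conceptual illustration only, not Spark or Hive internals; the hash function here is a stand-in for Murmur3/Hive Hash): if both tables are bucketed by the same hash function into the same number of buckets, matching keys always land in the same bucket, so the join can proceed bucket-by-bucket with no cross-bucket data movement.

```python
# Conceptual sketch of a bucketed join (NOT Spark code).
NUM_BUCKETS = 4

def bucket(key, num_buckets=NUM_BUCKETS):
    # Stand-in for a concrete hash function (Murmur3 in Spark, Hive Hash in Hive).
    # The only requirement is that BOTH sides use the same function and bucket count.
    return hash(key) % num_buckets

table_a = [(k, f"a{k}") for k in range(10)]       # keys 0..9
table_b = [(k, f"b{k}") for k in range(5, 15)]    # keys 5..14

def to_buckets(rows):
    buckets = [[] for _ in range(NUM_BUCKETS)]
    for key, val in rows:
        buckets[bucket(key)].append((key, val))
    return buckets

buckets_a = to_buckets(table_a)
buckets_b = to_buckets(table_b)

# Join each bucket pair independently; no shuffle across buckets is needed,
# because co-bucketing guarantees equal keys share a bucket index.
joined = []
for ba, bb in zip(buckets_a, buckets_b):
    lookup = dict(bb)
    for k, v in ba:
        if k in lookup:
            joined.append((k, v, lookup[k]))
```

If the two sides were bucketed with *different* hash functions (e.g. one with Murmur3, one with Hive Hash), equal keys could land in different bucket indices and the bucket-wise join above would silently drop matches — which is why Spark can only exploit Hive-bucketed tables if it understands Hive Hash.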

 

rb

 

On Wed, Mar 6, 2019 at 1:57 PM Reynold Xin <r...@databricks.com> wrote:

  
 

I think they might be used in bucketing? Not 100% sure.

 

 

On Wed, Mar 06, 2019 at 1:40 PM, <tcon...@gmail.com> wrote:

Hi,

 

I noticed the existence of a Hive Hash partitioning implementation in Spark, 
but also noticed that it's not being used, and that the Spark hash partitioning 
function is presently hardcoded to Murmur3. My question is whether Hive Hash is 
dead code, or are there future plans to support reading and understanding data 
that has been partitioned using Hive Hash? By understanding, I mean being able 
to avoid a full shuffle join on a Table A (partitioned by Hive Hash) when 
joining with a Table B that can be shuffled via Hive Hash to match Table A.

 

Thank you,

Tyson

 




 

-- 

Ryan Blue

Software Engineer

Netflix
