Sahil Takiar commented on HIVE-15104:

Hey [~lirui] I found some time to do some internal testing of this patch. I ran 
a 1 TB Parquet TPC-DS benchmark (subset of 49 queries, run three times each) on 
a physical cluster and found similar results to your TPC-DS benchmark.
|| ||Baseline Run (default configuration)||Optimized Serde||Optimized Serde + No GroupBy||
||Shuffle Bytes Read|699.5 GB|530.7 GB|531.6 GB|
||Shuffle Bytes Written|690 GB|529.5 GB|530.3 GB|
||Total Latency (min)|202|191|190|

So about a 25% reduction in shuffle data and a 5% improvement in latency. I 
think the reduction in shuffle data is significant and a good result on its own.

A few questions on the implementation.
 * The {{HiveKryoRegistrator}} still seems to be serializing the {{hashCode}}, 
so where are the actual savings coming from?
 * I'm not sure I understand why performance should improve when 
{{hive.spark.use.groupby.shuffle}} is set to {{false}}. It's still using the 
same registrator, right?
 * You said that we need to serialize the {{hashCode}} because "{{The cached 
RDDs will be serialized when stored to disk or transferred via network, then we 
need the hash code after the data is deserialized}}" - why do we need the 
{{hashCode}} after deserializing the data?
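
To make the third question concrete, here is a toy sketch of the trade-off (plain JDK I/O only - this is *not* the actual {{HiveKey}} or {{HiveKryoRegistrator}} code, and the field layout is an assumption for illustration): a key whose {{hashCode}} is derivable from its bytes can be written without the cached hash and have it recomputed after deserialization, saving 4 bytes per key on the wire.

```java
import java.io.*;
import java.util.Arrays;

// Toy stand-in for a shuffle key (NOT the real Hive classes): the cached
// hashCode is fully derivable from the key bytes, so it need not be shipped.
public class HiveKeySketch {
    static final class ToyHiveKey {
        final byte[] bytes;
        final int hashCode;
        ToyHiveKey(byte[] bytes) {
            this.bytes = bytes;
            this.hashCode = Arrays.hashCode(bytes); // recomputed, not stored
        }
    }

    // Full form: payload plus the 4-byte cached hashCode.
    static byte[] writeFull(ToyHiveKey k) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(k.bytes.length);
        out.write(k.bytes);
        out.writeInt(k.hashCode);
        out.flush();
        return bos.toByteArray();
    }

    // Optimized form: payload only; the hash is rebuilt on read, so nothing
    // is lost even though 4 bytes per key are saved in the shuffle data.
    static byte[] writeOptimized(ToyHiveKey k) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(k.bytes.length);
        out.write(k.bytes);
        out.flush();
        return bos.toByteArray();
    }

    static ToyHiveKey readOptimized(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        return new ToyHiveKey(bytes); // constructor recomputes hashCode
    }

    public static void main(String[] args) throws IOException {
        ToyHiveKey key = new ToyHiveKey("example-row-key".getBytes("UTF-8"));
        byte[] full = writeFull(key);
        byte[] opt = writeOptimized(key);
        System.out.println("full=" + full.length + " optimized=" + opt.length);
        System.out.println("hash preserved: "
                + (readOptimized(opt).hashCode == key.hashCode));
    }
}
```

If the real hash is *not* derivable from the serialized bytes (e.g. it was computed from the original row before serialization), this trick does not apply - which may be exactly why the patch still serializes it.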

> Hive on Spark generate more shuffle data than hive on mr
> --------------------------------------------------------
>                 Key: HIVE-15104
>                 URL: https://issues.apache.org/jira/browse/HIVE-15104
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1
>            Reporter: wangwenli
>            Assignee: Rui Li
>            Priority: Major
>             Fix For: 3.0.0
>         Attachments: HIVE-15104.1.patch, HIVE-15104.10.patch, 
> HIVE-15104.2.patch, HIVE-15104.3.patch, HIVE-15104.4.patch, 
> HIVE-15104.5.patch, HIVE-15104.6.patch, HIVE-15104.7.patch, 
> HIVE-15104.8.patch, HIVE-15104.9.patch, TPC-H 100G.xlsx
> The same SQL, running on the Spark and MR engines, will generate different 
> sizes of shuffle data.
> I think it is because Hive on MR serializes only part of the HiveKey, but 
> Hive on Spark, which uses Kryo, serializes the full HiveKey object.
> What is your opinion?

This message was sent by Atlassian JIRA
