-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/
-----------------------------------------------------------

(Updated Aug. 5, 2014, 3:53 a.m.)


Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.


Bugs: HIVE-7567
    https://issues.apache.org/jira/browse/HIVE-7567


Repository: hive-git


Description
-------

Support automatically adjusting the reducer number in the same way MR does, configured through the following 3 parameters:
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
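
For context, the estimation follows the same heuristic MR uses: take the total input size, divide by hive.exec.reducers.bytes.per.reducer (rounding up), and cap the result at hive.exec.reducers.max, unless mapreduce.job.reduces pins a constant count. A minimal standalone sketch of that heuristic (class and method names are mine for illustration, not the actual code in SetSparkReducerParallelism):

    public final class ReducerEstimator {

      // Sketch of the MR-style estimation described above.
      // Names are illustrative only, not the patch's code.
      public static int estimateReducers(long totalInputBytes,
                                         long bytesPerReducer,
                                         int maxReducers,
                                         int constantReducers) {
        if (constantReducers > 0) {
          return constantReducers;  // mapreduce.job.reduces pins a fixed count
        }
        // ceil(totalInputBytes / bytesPerReducer), at least 1 reducer
        int reducers = (int) ((totalInputBytes + bytesPerReducer - 1) / bytesPerReducer);
        reducers = Math.max(1, reducers);
        // cap at hive.exec.reducers.max
        return Math.min(maxReducers, reducers);
      }

      public static void main(String[] args) {
        // 10 GB of input at 256 MB per reducer, max 1009, no constant -> prints 40
        System.out.println(estimateReducers(10L << 30, 256L << 20, 1009, -1));
      }
    }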


Diffs (updated)
-----

  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java abd4718 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java f262065 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 73553ee 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 75a1033 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318 

Diff: https://reviews.apache.org/r/24221/diff/


Testing
-------


Thanks,

chengxiang li
