Chao Sun commented on HIVE-16758:

We should allow users to run Hive-on-Spark on a 3-node cluster without 
additional configuration. Scaling Hive-on-Spark up is what should require 
additional configuration, not the other way around.

My only concern is whether this will affect existing HoS jobs: i.e., before 
this change a job could be using the default value of 10, but after it the 
value becomes 1. If we use {{mapreduce.client.submit.file.replication}} with 
its default value of 10 on a cluster that only has 3 nodes, will the write 
still fail, or will HDFS implicitly adjust the replication factor down to 3?
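
To make the concern concrete, here is a minimal sketch of the clamping I would 
expect on the submit path. The class and method names are made up for 
illustration; only the two configuration keys and their shipped defaults are 
real:

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical illustration, not the actual Hive code path.
public class SubmitReplicationSketch {

  // Real Hadoop keys; the defaults below come from mapred-default.xml
  // and hdfs-default.xml respectively.
  private static final String SUBMIT_REPLICATION =
      "mapreduce.client.submit.file.replication"; // default 10
  private static final String DFS_REPLICATION_MAX =
      "dfs.replication.max";                      // default 512

  // Clamp the requested replication by dfs.replication.max. Note that the
  // live datanode count is not consulted: HDFS accepts a replication factor
  // above the node count and simply leaves the blocks under-replicated.
  static short chooseReplication(Configuration conf) {
    int requested = conf.getInt(SUBMIT_REPLICATION, 10);
    int dfsMax = conf.getInt(DFS_REPLICATION_MAX, 512);
    return (short) Math.min(requested, dfsMax);
  }
}
{code}

If my understanding of HDFS is right, a factor of 10 on a 3-node cluster would 
not fail the write; the blocks would just stay under-replicated. I may be wrong 
about that, though, so it would be good to confirm.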

> Better Select Number of Replications
> ------------------------------------
>                 Key: HIVE-16758
>                 URL: https://issues.apache.org/jira/browse/HIVE-16758
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: BELUGA BEHR
>            Assignee: BELUGA BEHR
>            Priority: Minor
>         Attachments: HIVE-16758.1.patch
> {{org.apache.hadoop.hive.ql.exec.SparkHashTableSinkOperator.java}}
> We should be smarter about how we pick a replication number. We should add a 
> new configuration equivalent to {{mapreduce.client.submit.file.replication}}. 
> This value should be around the square root of the number of nodes rather 
> than hard-coded; a rough sketch of that heuristic follows the excerpt below.
> {code}
> public static final String DFS_REPLICATION_MAX = "dfs.replication.max";
> 
> private int minReplication = 10;
> 
> @Override
> protected void initializeOp(Configuration hconf) throws HiveException {
>   ...
>   int dfsMaxReplication = hconf.getInt(DFS_REPLICATION_MAX, minReplication);
>   // minReplication must not exceed the value of dfs.replication.max
>   minReplication = Math.min(minReplication, dfsMaxReplication);
> }
> {code}
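> For illustration only, the heuristic could look roughly like this; the 
> {{numNodes}} parameter is a placeholder, since how to obtain the node count 
> is part of the design question:
> {code}
> // Rough sketch of the proposal: replication ~ sqrt(#nodes),
> // at least 1, and never above dfs.replication.max.
> static int pickReplication(Configuration hconf, int numNodes) {
>   int dfsMaxReplication = hconf.getInt(DFS_REPLICATION_MAX, 512);
>   int heuristic = Math.max(1, (int) Math.round(Math.sqrt(numNodes)));
>   return Math.min(heuristic, dfsMaxReplication);
> }
> {code}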
> https://hadoop.apache.org/docs/r2.7.2/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
