[ https://issues.apache.org/jira/browse/HIVE-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16102991#comment-16102991 ]

George Smith commented on HIVE-17146:
-------------------------------------

[~lirui] The value 10 exceeds the {{dfs.replication}} value, not {{dfs.replication.max}}.

Here is the stack trace:
{code:java}
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Requested replication factor of 10 exceeds maximum of 3 for /tmp/hive/xxxx/5db022b0-42b8-44c1-b303-823ed6c1f133/hive_2017-07-27_10-40-52_061_7060239276893628864-1/-mr-10003/HashTable-Stage-1/MapJoin-mapfile161--.hashtable/HASHTABLESINK_218-552787044 from x.x.x.x
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:1027)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2589)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:595)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:112)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:395)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.apache.hadoop.ipc.Client.call(Client.java:1471)
        at org.apache.hadoop.ipc.Client.call(Client.java:1408)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy12.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy13.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1965)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1738)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1663)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:405)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:401)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:401)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:344)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:920)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:901)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:827)
        at org.apache.hadoop.hive.ql.exec.SparkHashTableSinkOperator.flushToFile(SparkHashTableSinkOperator.java:156)
        at org.apache.hadoop.hive.ql.exec.SparkHashTableSinkOperator.closeOp(SparkHashTableSinkOperator.java:97)
        ... 18 more

{code}
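
For context, the create call is rejected on the NameNode side in {{BlockManager.verifyReplication}}, which bounds the requested factor by the server's own configured limits. Roughly like this (a simplified sketch of that check, not the exact Hadoop source):
{code:java}
// Simplified sketch of the NameNode-side validation: maxReplication comes from
// dfs.replication.max and minReplication from dfs.namenode.replication.min,
// both read from the *server* configuration, so the client's dfs.replication
// plays no role in this check.
static void verifyReplication(String src, short replication,
    short minReplication, short maxReplication) throws IOException {
  if (replication > maxReplication) {
    throw new IOException("Requested replication factor of " + replication
        + " exceeds maximum of " + maxReplication + " for " + src);
  }
  if (replication < minReplication) {
    throw new IOException("Requested replication factor of " + replication
        + " is less than the required minimum of " + minReplication + " for " + src);
  }
}
{code}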


> Spark on Hive - Exception while joining tables - "Requested replication 
> factor of 10 exceeds maximum of x" 
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-17146
>                 URL: https://issues.apache.org/jira/browse/HIVE-17146
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>    Affects Versions: 2.1.1, 3.0.0
>            Reporter: George Smith
>            Assignee: Ashutosh Chauhan
>
> We found a bug in the current implementation of 
> [org.apache.hadoop.hive.ql.exec.SparkHashTableSinkOperator|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/SparkHashTableSinkOperator.java]
> The *magic number 10* used as the minReplication factor can cause this 
> exception when the configuration parameter _dfs.replication_ is lower than 10.
> Consider this [properties configuration|https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml] on our cluster (with fewer than 10 nodes):
> {code}
> dfs.namenode.replication.min=1
> dfs.replication=2
> dfs.replication.max=512 (that's the default value)
> {code}
> The current implementation computes the target file replication as follows 
> (relevant snippets of the code):
> {code}
> private int minReplication = 10;
> ...
> int dfsMaxReplication = hconf.getInt(DFS_REPLICATION_MAX, minReplication);
> // the minReplication value should not exceed dfs.replication.max
> minReplication = Math.min(minReplication, dfsMaxReplication);
> ...
> FileSystem fs = path.getFileSystem(htsOperator.getConfiguration());
> short replication = fs.getDefaultReplication(path);
> ...
> int numOfPartitions = replication;
> replication = (short) Math.max(minReplication, numOfPartitions);
> // use the resulting replication value in fs.create(path, replication);
> {code}
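> To make the failure concrete, here is how those values work out under the 
> configuration above (a hedged walk-through of the snippet, assuming the Hive 
> client sees the same _dfs.replication.max_):
> {code:java}
> // Value flow with dfs.replication=2 and dfs.replication.max=512:
> int minReplication = 10;                 // hard-coded magic number
> int dfsMaxReplication = 512;             // hconf.getInt(DFS_REPLICATION_MAX, 10)
> minReplication = Math.min(10, 512);      // still 10
> short replication = 2;                   // fs.getDefaultReplication(path), i.e. dfs.replication
> int numOfPartitions = 2;
> replication = (short) Math.max(10, 2);   // = 10; dfs.replication is discarded
> // fs.create(path, ..., 10) then asks the NameNode for a factor of 10, which
> // fails on any cluster whose effective maximum is below 10.
> {code}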
> With the current code the replication value actually used is 10, and the 
> config value _dfs.replication_ is not used at all.
> There is probably more than one (easy) way to fix it:
> # Set the field to {code}private int minReplication = 1;{code} I don't see any 
> obvious reason for the value 10, or
> # Initialize minReplication from the config value _dfs.namenode.replication.min_ 
> with a default value of 1, or
> # Compute the replication this way: {code}replication = Math.min(numOfPartitions, 
> dfsMaxReplication);{code} (see the sketch below this list), or
> # Use {code}replication = numOfPartitions;{code} directly.
> The config value _dfs.replication_ has a default value of 3 and is supposed to 
> always be lower than _dfs.replication.max_, so no extra check is probably needed.
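> For illustration, option 3 would look roughly like this (a sketch against the 
> snippet above, not a tested patch):
> {code:java}
> // Option 3 sketch: start from the file system default (which follows
> // dfs.replication) and only clamp it by dfs.replication.max, dropping the
> // hard-coded minimum of 10. The fallback of 512 mirrors hdfs-default.xml.
> int dfsMaxReplication = hconf.getInt(DFS_REPLICATION_MAX, 512);
> short replication = fs.getDefaultReplication(path);
> replication = (short) Math.min(replication, dfsMaxReplication);
> fs.create(path, replication);
> {code}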
> Any suggestions on which option to choose?
> As a *workaround* for this issue we had to set dfs.replication.max=2, but 
> obviously the _dfs.replication_ value should NOT be ignored, and the underlying 
> problem should be resolved.
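> Until a fix lands, the workaround can also be applied per session rather than 
> cluster-wide (assuming, as the snippet above suggests, that the operator reads 
> the property from the Hive session configuration; an illustration, not a 
> tested recipe):
> {code}
> -- hypothetical per-session workaround in the Hive CLI / Beeline;
> -- the property can also be set in the client-side hdfs-site.xml
> set dfs.replication.max=2;
> {code}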



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
