[ https://issues.apache.org/jira/browse/HDFS-16721 ]
Jingxuan Fu deleted comment on HDFS-16721:
------------------------------------
was (Author: fujx):
My partner committed a PR to trunk. Could you assign this issue to us?
> Improve the check code of the important configuration item
> “dfs.client.socket-timeout”.
> ---------------------------------------------------------------------------------------
>
> Key: HDFS-16721
> URL: https://issues.apache.org/jira/browse/HDFS-16721
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: dfsclient
> Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
> Reporter: Jingxuan Fu
> Assignee: Jingxuan Fu
> Priority: Major
> Labels: pull-request-available
>
> {code:java}
> <property>
>   <name>dfs.client.socket-timeout</name>
>   <value>60000</value>
>   <description>
>     Default timeout value in milliseconds for all sockets.
>   </description>
> </property>
> {code}
> "dfs.client.socket-timeout" as the default timeout value for all sockets is
> applied in multiple places, it is a configuration item with significant
> impact, but the value of this configuration item is not checked in the source
> code and when it is set to an abnormal value just throw an overgeneralized
> exception and cannot be corrected in time , which affects the normal use of
> the program.
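> For reference, the value is read with a plain Configuration.getInt lookup
> and no range check. The sketch below illustrates this (the class name
> SocketTimeoutRead and the standalone main() are illustrative only; in
> trunk the lookup lives inside the client configuration class, e.g.
> DfsClientConf):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> public class SocketTimeoutRead {
>   // "dfs.client.socket-timeout" with a 60s default, mirroring hdfs-default.xml.
>   static final String KEY = "dfs.client.socket-timeout";
>   static final int DEFAULT_MS = 60_000;
>
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // getInt() accepts any integer, including 0 or a negative value, so an
>     // abnormal setting propagates silently into socket setup.
>     int socketTimeout = conf.getInt(KEY, DEFAULT_MS);
>     System.out.println(KEY + " = " + socketTimeout);
>   }
> }
> {code}
> With an abnormal value configured, the client eventually fails far from
> the root cause, for example: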
> {code:java}
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hdfsapi/test/testhdfs.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>     at java.base/java.security.AccessController.doPrivileged(Native Method)
>     at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
> {code}
> So I used Preconditions.checkArgument() to refine the code that checks
> this configuration item.
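> A minimal sketch of that kind of check is below (assuming the shaded Guava
> Preconditions that Hadoop trunk ships under org.apache.hadoop.thirdparty;
> the class name SocketTimeoutCheck, the helper readSocketTimeout(), and the
> exact bound and message are illustrative, not the committed patch):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
>
> public class SocketTimeoutCheck {
>   static final String KEY = "dfs.client.socket-timeout";
>   static final int DEFAULT_MS = 60_000;
>
>   static int readSocketTimeout(Configuration conf) {
>     int socketTimeout = conf.getInt(KEY, DEFAULT_MS);
>     // Fail fast at configuration-read time with a precise message, instead
>     // of surfacing a generic RemoteException much later during a write.
>     Preconditions.checkArgument(socketTimeout > 0,
>         "Invalid value configured for %s: %s (must be a positive number of milliseconds)",
>         KEY, socketTimeout);
>     return socketTimeout;
>   }
> }
> {code}
> With such a check in place, a misconfigured timeout is reported against
> the offending key the moment the client configuration is built.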