[
https://issues.apache.org/jira/browse/HDFS-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17159671#comment-17159671
]
Ayush Saxena commented on HDFS-15443:
-------------------------------------
Thanks [~elgoiri], no objections from my side.
> Setting dfs.datanode.max.transfer.threads to a very small value can cause
> strange failure.
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-15443
> URL: https://issues.apache.org/jira/browse/HDFS-15443
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: AMC-team
> Priority: Major
> Attachments: HDFS-15443.000.patch, HDFS-15443.001.patch
>
>
> The configuration parameter dfs.datanode.max.transfer.threads specifies the
> maximum number of threads to use for transferring data in and out of the DataNode.
> This is a vital parameter that needs to be tuned carefully.
> {code:java}
> // DataXceiverServer.java
> // Make sure the xceiver count is not exceeded
> int curXceiverCount = datanode.getXceiverCount();
> if (curXceiverCount > maxXceiverCount) {
>   throw new IOException("Xceiver count " + curXceiverCount
>       + " exceeds the limit of concurrent xceivers: "
>       + maxXceiverCount);
> }
> {code}
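> For reference, the limit checked above is read straight from the configuration, roughly as follows (the constant names from DFSConfigKeys are shown for orientation; treat this as a paraphrase rather than the exact source):
> {code:java}
> // Sketch: the xceiver limit is taken directly from the configuration,
> // with no validation of the configured value.
> this.maxXceiverCount = conf.getInt(
>     DFSConfigKeys.DFS_DATANODE_MAX_RECEIVER_THREADS_KEY,      // "dfs.datanode.max.transfer.threads"
>     DFSConfigKeys.DFS_DATANODE_MAX_RECEIVER_THREADS_DEFAULT); // default 4096
> {code}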
> Many issues have been caused by not setting this parameter to an appropriate
> value, yet there is no check code to restrict it. Although a hard-and-fast rule
> is difficult because the right value depends on the number of cores, main
> memory, etc., *we can prevent users from accidentally setting this value to an
> obviously wrong one* (e.g. a negative value, which completely breaks the
> availability of the DataNode).
> *How to fix:*
> Add a proper check for the parameter value, as sketched below.
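> A minimal sketch of what such a check could look like, wrapped around the configuration read shown earlier (the placement and exact message are illustrative only, not the attached patch):
> {code:java}
> // Sketch: fail fast on an obviously wrong value instead of letting a
> // non-positive limit reject every incoming xceiver at runtime.
> if (maxXceiverCount <= 0) {
>   throw new IllegalArgumentException(
>       "Invalid value configured for dfs.datanode.max.transfer.threads: "
>       + maxXceiverCount + " (must be positive)");
> }
> {code}
> An alternative would be to log a warning and fall back to the default instead of failing DataNode startup; either way the misconfiguration surfaces immediately rather than as a per-request IOException.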