[ 
https://issues.apache.org/jira/browse/HDFS-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173615#comment-17173615
 ] 

Ayush Saxena commented on HDFS-15443:
-------------------------------------

Tried the three tests:

{noformat}
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.hdfs.TestPersistBlocks
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.304 s 
- in org.apache.hadoop.hdfs.TestPersistBlocks
[INFO] Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.033 s 
- in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
[INFO] Running org.apache.hadoop.hdfs.TestDecommission
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.536 s 
- in org.apache.hadoop.hdfs.TestDecommission
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
{noformat}
The other three have been failing for more than the last 20 builds.

Committing..

> Setting dfs.datanode.max.transfer.threads to a very small value can cause 
> strange failure.
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15443
>                 URL: https://issues.apache.org/jira/browse/HDFS-15443
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: AMC-team
>            Assignee: AMC-team
>            Priority: Major
>         Attachments: HDFS-15443.000.patch, HDFS-15443.001.patch, 
> HDFS-15443.002.patch, HDFS-15443.003.patch
>
>
> The configuration parameter dfs.datanode.max.transfer.threads specifies the 
> maximum number of threads to use for transferring data in and out of the DataNode. 
> This is a vital parameter that needs to be tuned carefully. 
> {code:java}
> // DataXceiverServer.java
> // Make sure the xceiver count is not exceeded
> int curXceiverCount = datanode.getXceiverCount();
> if (curXceiverCount > maxXceiverCount) {
>   throw new IOException("Xceiver count " + curXceiverCount
>       + " exceeds the limit of concurrent xceivers: "
>       + maxXceiverCount);
> }
> {code}
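> For context, a minimal sketch of how the limit is typically populated from 
> configuration (the DFSConfigKeys constant names below are assumed from trunk 
> and should be treated as illustrative):
> {code:java}
> // Sketch: reading the xceiver limit the way DataXceiverServer roughly does.
> // DFS_DATANODE_MAX_RECEIVER_THREADS_KEY maps to "dfs.datanode.max.transfer.threads".
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hdfs.DFSConfigKeys;
>
> public class MaxXceiverCountSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // Note: a zero or negative value configured by the user is accepted
>     // here as-is today, which is exactly the problem described below.
>     int maxXceiverCount = conf.getInt(
>         DFSConfigKeys.DFS_DATANODE_MAX_RECEIVER_THREADS_KEY,
>         DFSConfigKeys.DFS_DATANODE_MAX_RECEIVER_THREADS_DEFAULT);
>     System.out.println("maxXceiverCount = " + maxXceiverCount);
>   }
> }
> {code}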
> Many issues are caused by not setting this parameter to an appropriate 
> value. However, there is no check code to validate the parameter. 
> Although a hard-and-fast rule is difficult because we need to consider the 
> number of cores, main memory, etc., *we can prevent users from accidentally 
> setting this value to an obviously wrong one* (e.g. a negative value, which 
> totally breaks the availability of the datanode).
> *How to fix:*
> Add a proper validation check for the parameter (a sketch follows).
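> A minimal sketch of such a check (assuming validation happens when the 
> configured value is first read, e.g. at DataXceiverServer construction; the 
> helper name here is hypothetical, not the method added by the patch):
> {code:java}
> // Hypothetical validation helper: reject obviously wrong values for
> // dfs.datanode.max.transfer.threads before the DataNode starts serving.
> import java.io.IOException;
>
> public class TransferThreadsCheck {
>   static int validateMaxXceiverCount(int configured, String key) throws IOException {
>     if (configured <= 0) {
>       // With a non-positive limit, every incoming transfer would trip the
>       // "curXceiverCount > maxXceiverCount" check above, so fail fast instead.
>       throw new IOException("Invalid value " + configured + " for " + key
>           + ": must be a positive integer");
>     }
>     return configured;
>   }
>
>   public static void main(String[] args) throws IOException {
>     System.out.println(validateMaxXceiverCount(4096, "dfs.datanode.max.transfer.threads"));
>   }
> }
> {code}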



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
