[
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108441#comment-16108441
]
Weiwei Yang edited comment on HDFS-12163 at 8/1/17 6:27 AM:
------------------------------------------------------------
Submitted the v1 patch to fix two issues: 1) set the KSM handler count to 20 instead of 200 for the
mini ozone cluster; 2) fix some potential thread leaks. The following tables compare the
thread counts before and after applying the patch; I have only tested the
distributed/local handler test cases.
Distributed handlers with 1 datanode
|| Mode, numOfDn || Step || NumOfThreads || Change ||
| (distributed,1) | init | 6 | 0 |
| (distributed,1) | MiniOzoneCluster | 222 | {color:red} *-180* {color} |
| (distributed,1) | shutdown | 79 | {color:red}*-3*{color} |
| (distributed,1) | sleep | 12 | {color:red}*-1*{color} |
Local handlers with 1 datanode
|| Mode, numOfDn || Step || NumOfThreads || Change ||
| (local,1) | init | 6 | {color:red}*-11*{color} |
| (local,1) | MiniOzoneCluster | 222 | {color:red}*-183*{color} |
| (local,1) | shutdown | 79 | {color:red}*-6*{color} |
| (local,1) | sleep | 12 | {color:red}*-4*{color} |
Distributed handlers with 5 datanodes
|| Mode, numOfDn || Step || NumOfThreads || Change ||
| (distributed,5) | init | 6 | 0 |
| (distributed,5) | MiniOzoneCluster | 407 | {color:red}*-180*{color} |
| (distributed,5) | shutdown | 336 | {color:red}*-11*{color} |
| (distributed,5) | sleep | 13 | {color:red}*-2*{color} |
Local handlers with 5 datanodes
|| Mode, numOfDn || Step || NumOfThreads || Change ||
| (local,5) | init | 16 | {color:red}*-4*{color} |
| (local,5) | MiniOzoneCluster | 408 | {color:red}*-184*{color} |
| (local,5) | shutdown | 337 | {color:red}*-15*{color} |
| (local,5) | sleep | 14 | {color:red}*-5*{color} |
Hope it makes sense.
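For reference, a minimal sketch of how the per-step thread counts in the tables above could be collected in a JUnit-style test. It uses the JVM's ThreadMXBean to count live threads at each step; the MiniOzoneCluster builder calls and the KSM handler-count configuration key shown in the comments are assumptions for illustration and may differ from the actual patch (TestOzoneThreadCount20170719.patch).

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

/**
 * Sketch of a thread-count probe for MiniOzoneCluster tests.
 * The cluster/builder calls below are placeholders; the real test
 * attached to this issue may differ.
 */
public class ThreadCountProbe {
  private static final ThreadMXBean THREADS =
      ManagementFactory.getThreadMXBean();

  /** Log and return the number of live JVM threads at a named step. */
  static int report(String step) {
    int count = THREADS.getThreadCount();
    System.out.println(step + ": " + count + " threads");
    return count;
  }

  public static void main(String[] args) throws Exception {
    report("init");

    // Hypothetical: build a 1-datanode MiniOzoneCluster with a reduced
    // KSM handler count (the actual config key / builder API may differ).
    // OzoneConfiguration conf = new OzoneConfiguration();
    // conf.setInt("ozone.ksm.handler.count.key", 20);   // assumption
    // MiniOzoneCluster cluster = new MiniOzoneCluster.Builder(conf)
    //     .numDataNodes(1).build();
    report("MiniOzoneCluster");

    // cluster.shutdown();
    report("shutdown");

    // Give daemon threads a moment to exit before the final count.
    Thread.sleep(5000);
    report("sleep");
  }
}
{code}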
> Ozone: MiniOzoneCluster uses 400+ threads
> -----------------------------------------
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone, test
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Weiwei Yang
> Attachments: most_used_threads.png,
> TestOzoneThreadCount20170719.patch, thread_dump.png
>
>
> Checked the number of active threads used in MiniOzoneCluster with various
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings. It uses 400+ threads for a
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down
> after the test is finished. Therefore, when tests run consecutively, the
> later tests use more threads.
> Will post the details in comments.