Rushabh S Shah commented on HDFS-12288:

Correct me if I'm wrong, but a BlockReceiver and a PacketResponder are created 
for each DataXceiver, so I don't think it makes sense to include them in the 
calculation.
The default value of {{DFS_DATANODE_MAX_RECEIVER_THREADS_KEY}} is 4096.
For each block receiver, we create 2 threads (DataXceiver and PacketResponder).
Excluding all the other threads, we can therefore have at most 2048 block 
receiver threads at any given time.
If you don't include the PacketResponder thread in the calculation, you will be 
able to have 4096 block receiver threads running at a time.
Or are you saying you will not create a separate metric for PacketResponder, 
but instead increment {{dataNodeActiveXceiversCount}} by 2 every time you 
create a BlockReceiver thread?
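If the intent is the former (one increment per DataXceiver, with the 
PacketResponder helper excluded), a minimal sketch of such a counter might look 
like the following. This is an illustration only; the class and method names are 
hypothetical and not taken from the actual patch.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of counting only DataXceiver threads: each xceiver
// increments once on start and decrements once on exit, regardless of how
// many helper threads (e.g. PacketResponder) it spawns along the way.
public class XceiverCounter {
    private final AtomicInteger activeXceivers = new AtomicInteger(0);

    // Called when a DataXceiver thread starts; returns the new count.
    public int xceiverStarted() {
        return activeXceivers.incrementAndGet();
    }

    // Called when a DataXceiver thread exits. Its PacketResponder is
    // deliberately never counted, so capacity is not halved.
    public int xceiverFinished() {
        return activeXceivers.decrementAndGet();
    }

    public int count() {
        return activeXceivers.get();
    }
}
```

With this shape, two concurrent block receivers report a count of 2, not 4, 
which is the distinction the question above is getting at.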

> Fix DataNode's xceiver count calculation
> ----------------------------------------
>                 Key: HDFS-12288
>                 URL: https://issues.apache.org/jira/browse/HDFS-12288
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, hdfs
>            Reporter: Lukas Majercak
>            Assignee: Lukas Majercak
>         Attachments: HDFS-12288.001.patch, HDFS-12288.002.patch
> The problem with the ThreadGroup.activeCount() method is that it is only a 
> very rough estimate; in reality it returns the total number of threads in 
> the thread group as opposed to the threads actually running.
> In some DNs, we saw it return ~50 for a long time, even though the actual 
> number of DataXceiver threads was next to none.
> This is a big issue as we use the xceiverCount to make decisions on the NN 
> for choosing replication source DN or returning DNs to clients for R/W.
> The plan is to reuse the DataNodeMetrics.dataNodeActiveXceiversCount value, 
> which only accounts for the actual number of DataXceiver threads currently 
> running and thus represents the load on the DN much better.
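The description's point about ThreadGroup.activeCount() can be reproduced in a 
small standalone demo: the method counts every live thread in the group (and 
its subgroups), including idle helper threads, not just threads doing xceiver 
work. The thread names below are illustrative only.

```java
import java.util.concurrent.CountDownLatch;

// Standalone demo: ThreadGroup.activeCount() counts all live threads in the
// group, even ones that are parked and doing no work at all.
public class ActiveCountDemo {
    public static int demoActiveCount() throws InterruptedException {
        ThreadGroup group = new ThreadGroup("dataXceiverServer");
        CountDownLatch done = new CountDownLatch(1);

        // Both threads just block on the latch; neither is moving any data.
        Runnable idle = () -> {
            try {
                done.await();
            } catch (InterruptedException ignored) {
            }
        };
        Thread xceiver = new Thread(group, idle, "DataXceiver");
        Thread responder = new Thread(group, idle, "PacketResponder");
        xceiver.start();
        responder.start();
        Thread.sleep(200); // let both threads reach the latch

        // Reports 2 "active" threads despite both being completely idle.
        int count = group.activeCount();

        done.countDown();
        xceiver.join();
        responder.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("activeCount = " + demoActiveCount());
    }
}
```

A dedicated counter incremented and decremented by the xceiver threads 
themselves avoids this overcounting entirely.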
