Hi Jason,

I looked through the source code; neither parameter is deprecated.

dfs.namenode.service.handler.count : Specifies the number of
threads the NameNode uses to handle RPC requests from DataNodes,
the standby NameNode, and all other non-client nodes
(BackupNode and SecondaryNameNode). Defaults to 10.
This parameter takes effect only if dfs.namenode.servicerpc-address
is configured; in that case the NameNode starts an extra
RPC server to handle requests from non-client nodes.

dfs.namenode.handler.count : Specifies the number of threads
the NameNode uses to handle client RPC requests. Defaults to 10.
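For example, enabling the separate service RPC server and sizing both thread pools in hdfs-site.xml might look like this (a sketch; the hostname, port, and values are illustrative, not recommendations):

```xml
<!-- hdfs-site.xml (illustrative values) -->
<property>
  <!-- Configuring this address enables the separate service RPC server -->
  <name>dfs.namenode.servicerpc-address</name>
  <value>namenode.example.com:8040</value>
</property>
<property>
  <!-- Threads for DataNode / non-client RPC requests -->
  <name>dfs.namenode.service.handler.count</name>
  <value>20</value>
</property>
<property>
  <!-- Threads for client RPC requests -->
  <name>dfs.namenode.handler.count</name>
  <value>40</value>
</property>
```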

I'll file a jira to document dfs.namenode.service.handler.count.
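As for tuning suggestions: one rule of thumb that appears in vendor tuning guides (not an official Hadoop formula, so treat it as a starting point only) is to scale dfs.namenode.handler.count with the natural log of the cluster size, roughly 20 * ln(number of nodes), and never below the default of 10:

```python
import math

def suggested_handler_count(num_nodes):
    """Heuristic sizing for dfs.namenode.handler.count:
    roughly 20 * ln(cluster size), floored to an int, with the
    default (10) as a lower bound. This is a rule of thumb from
    tuning guides, not an official formula."""
    return max(10, int(math.log(num_nodes) * 20))

print(suggested_handler_count(50))  # 50-node cluster -> 78
```

You would then benchmark under your real workload and adjust; RPC queue time metrics on the NameNode are the usual signal that the pool is too small.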

Regards,
Akira

On 05/20/2015 12:30 PM, jason lu wrote:
Hi,

what is the difference between “dfs.namenode.service.handler.count” and “dfs.namenode.handler.count” in hdfs-site.xml?
 I found this in stackoverflow:
The RPC server needs threads to handle requests. Hadoop ships with its own RPC framework, and you can configure dfs.namenode.service.handler.count *for the datanodes* (it defaults to 10), or dfs.namenode.handler.count *for other clients*, like MapReduce jobs and JobClients that want to run a job. When a request comes in and the server wants to create a new handler, it may go out of memory (new threads also allocate a good chunk of stack space; maybe you need to increase this).

I can’t find dfs.namenode.service.handler.count in the hdfs-site.xml of Hadoop 2.3; is it deprecated? But I found it in the source.

BTW, any suggestion for these parameters?

thanks.


