[
https://issues.apache.org/jira/browse/HADOOP-13144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15283247#comment-15283247
]
Jason Kace commented on HADOOP-13144:
-------------------------------------
One solution is to allow each user + remote address to utilize multiple
connection threads. This requires reconfiguring the {{Client}} class to
maintain a pool of connections per {{ConnectionId}} rather than a single
cached connection.
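A minimal sketch of what such a pool could look like is below. This is
illustrative only, under assumed names ({{IpcConnection}},
{{ConnectionPool}}, {{maxSize}}); it is not taken from the attached patch.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a per-ConnectionId pool. "IpcConnection" is a
// stand-in for the Client.Connection inner class (socket + reader thread).
class IpcConnection { }

class ConnectionPool {
  private final List<IpcConnection> pool = new ArrayList<>();
  private final AtomicInteger next = new AtomicInteger();
  private final int maxSize;

  ConnectionPool(int maxSize) {
    this.maxSize = maxSize;
  }

  // Grow the pool up to maxSize, then hand out connections round-robin,
  // so concurrent calls from one user are no longer serialized on a
  // single connection thread.
  synchronized IpcConnection get() {
    if (pool.size() < maxSize) {
      IpcConnection c = new IpcConnection();
      pool.add(c);
      return c;
    }
    int i = Math.floorMod(next.getAndIncrement(), pool.size());
    return pool.get(i);
  }
}
{code}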
An alternative solution is to create multiple {{ConnectionId}} instances per
user + remote address. The current {{ConnectionId}} class produces a single
hashCode per user + remote address, so all requests collapse onto the same
cached connection. {{ConnectionId}} can either be modified directly or made
visible for inheritance (the attached solution takes the inheritance
approach).
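A rough sketch of the inheritance approach follows. The {{ConnectionId}}
stand-in below is trimmed to two fields, and {{IndexedConnectionId}} is a
hypothetical name; the real class also carries the protocol, ticket,
timeouts, etc., and the attached patch may differ in the details.

{code:java}
import java.net.InetSocketAddress;
import java.util.Objects;

// Trimmed stand-in for Client.ConnectionId: identity is the remote
// address + user, so one cached connection exists per pair.
class ConnectionId {
  final InetSocketAddress address;
  final String user;

  ConnectionId(InetSocketAddress address, String user) {
    this.address = address;
    this.user = user;
  }

  @Override public int hashCode() { return Objects.hash(address, user); }

  @Override public boolean equals(Object o) {
    if (o == null || o.getClass() != getClass()) return false;
    ConnectionId other = (ConnectionId) o;
    return address.equals(other.address) && user.equals(other.user);
  }
}

// Hypothetical subclass: an index is mixed into the identity, so N
// distinct keys (and therefore N cached connections) can exist for the
// same user + remote address.
class IndexedConnectionId extends ConnectionId {
  final int index;

  IndexedConnectionId(InetSocketAddress address, String user, int index) {
    super(address, user);
    this.index = index;
  }

  @Override public int hashCode() { return 31 * super.hashCode() + index; }

  @Override public boolean equals(Object o) {
    // super.equals already enforces the exact-class check above.
    return super.equals(o) && ((IndexedConnectionId) o).index == index;
  }
}
{code}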
Our use case for this feature requires a single user to issue a large number
of RPC requests to a single NameNode via the IPC client. Better throughput
is needed in the existing IPC client to sustain up to 100k requests/second
from the same user to the same remote address.
> Enhancing IPC client throughput via multiple connections per user
> -----------------------------------------------------------------
>
> Key: HADOOP-13144
> URL: https://issues.apache.org/jira/browse/HADOOP-13144
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ipc
> Reporter: Jason Kace
> Priority: Minor
> Fix For: 2.8.0
>
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single
> connection thread for each {{ConnectionId}}. The {{ConnectionId}} is unique
> to the connection's remote address, ticket and protocol. Each
> {{ConnectionId}} is 1:1 mapped to a connection thread by the client via a
> map cache.
> The result is that all IPC read/write activity is serialized through a
> single thread for each user/ticket + address. If a single user makes
> repeated calls (1k-100k/sec) to the same destination, the IPC client
> becomes a bottleneck.
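For context, a simplified sketch of the keying described above, with shapes
approximated from the 2.x {{org.apache.hadoop.ipc.Client}} source rather
than copied verbatim:

{code:java}
import java.util.Hashtable;

// Simplified, approximate sketch of the existing 1:1 map cache in
// org.apache.hadoop.ipc.Client; inner classes are reduced to stubs.
class ClientSketch {
  static class ConnectionId { /* remote address + ticket + protocol */ }
  static class Connection { /* one socket, one reader thread */ }

  private final Hashtable<ConnectionId, Connection> connections =
      new Hashtable<>();

  // Every call carrying the same ConnectionId resolves to the same
  // Connection, so all of a user's RPCs to one address funnel through a
  // single thread, which is the bottleneck this issue targets.
  Connection getConnection(ConnectionId remoteId) {
    synchronized (connections) {
      Connection c = connections.get(remoteId);
      if (c == null) {
        c = new Connection();
        connections.put(remoteId, c);
      }
      return c;
    }
  }
}
{code}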