[ https://issues.apache.org/jira/browse/HADOOP-13144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391669#comment-16391669 ]

Íñigo Goiri commented on HADOOP-13144:
--------------------------------------

Thanks [~ywskycn] for trying this out; I posted [^HADOOP-13144.001.patch] with 
the compilation fixes.
I submitted the patch, so Yetus should cover the next revisions.

In general, this touches a pretty sensitive part of the Hadoop code, but the 
modifications are minimal.
At the same time, as [~ywskycn] pointed out, it dramatically improves the 
performance of the HDFS Routers.
If this goes in, we would open a separate JIRA for the Router connection 
creation.

Anybody available for a review?

> Enhancing IPC client throughput via multiple connections per user
> -----------------------------------------------------------------
>
>                 Key: HADOOP-13144
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13144
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>            Reporter: Jason Kace
>            Assignee: Íñigo Goiri
>            Priority: Minor
>         Attachments: HADOOP-13144.000.patch, HADOOP-13144.001.patch
>
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single 
> connection thread for each {{ConnectionId}}.  The {{ConnectionId}} is unique 
> to the connection's remote address, ticket and protocol.  Each 
> {{ConnectionId}} is mapped 1:1 to a connection thread by the client via a 
> map cache.
> The result is that all IPC read/write activity is serialized through a 
> single thread for each user/ticket + address.  If a single user makes 
> repeated calls (1k-100k/sec) to the same destination, the IPC client becomes 
> a bottleneck.
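>
> A minimal sketch of the mapping above, and one way multiple connections per 
> user could be keyed; the names here ({{IndexedConnectionId}}, 
> {{CONNECTIONS_PER_USER}}) are illustrative assumptions, not the actual 
> {{org.apache.hadoop.ipc.Client}} internals:
> {code:java}
> import java.net.InetSocketAddress;
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ThreadLocalRandom;
>
> public class MultiConnectionSketch {
>
>   // Today: one connection (and one I/O thread) per (address, ticket, protocol).
>   record ConnectionId(InetSocketAddress address, String ticket, String protocol) {}
>
>   // Sketch: add an index so the same user/address can map to several connections.
>   record IndexedConnectionId(ConnectionId base, int index) {}
>
>   static final int CONNECTIONS_PER_USER = 4; // assumed tunable
>
>   private final Map<IndexedConnectionId, Connection> connections =
>       new ConcurrentHashMap<>();
>
>   // Picking a slot at random spreads calls over several sockets/threads,
>   // instead of serializing everything through a single connection thread.
>   Connection getConnection(ConnectionId id) {
>     int slot = ThreadLocalRandom.current().nextInt(CONNECTIONS_PER_USER);
>     return connections.computeIfAbsent(
>         new IndexedConnectionId(id, slot), Connection::new);
>   }
>
>   static class Connection {
>     Connection(IndexedConnectionId id) {
>       // would open the socket and start the reader thread here
>     }
>   }
> }
> {code}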



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
