[ https://issues.apache.org/jira/browse/HADOOP-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-4797:
---------------------------------

    Attachment: TestRpcCpu.patch


Ok, benchmark with much saner results. The only difference is that this one 
returns a (Writable) ByteArray instead of a naked 'byte[]', to avoid having 
ObjectWritable handle the array. CPU for 100 calls: 

  * Without the patch: ~ 7000
  * With the patch: ~ 1050 
  * The RPC server takes 6-7 times less CPU to serve a 10MB buffer.

I hope a 6-7x CPU reduction is pretty good for a side benefit. 

In the previous version of the benchmark, the client reads much more slowly, so 
sending 10MB usually requires more write() calls. The extra CPU penalty in trunk 
is directly proportional to the number of write() calls required to write the 
full buffer.
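The ByteArray wrapper mentioned above could be sketched as below. This is a hypothetical reconstruction, not the actual class from the patch: the idea is that a custom serializer writes the length and the raw bytes in bulk, instead of letting ObjectWritable reflectively serialize a 'byte[]' return value. The real class would implement org.apache.hadoop.io.Writable; the interface is omitted here so the sketch compiles with only the JDK.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical sketch of a ByteArray Writable-style wrapper
// (names and layout assumed; the class in TestRpcCpu.patch may differ).
class ByteArray {

    private byte[] bytes;

    ByteArray() {
        // no-arg constructor, needed so the RPC layer can
        // instantiate the object before calling readFields()
    }

    ByteArray(byte[] bytes) {
        this.bytes = bytes;
    }

    byte[] get() {
        return bytes;
    }

    // serialize: length prefix followed by the raw payload,
    // written with one bulk write() instead of per-element handling
    void write(DataOutput out) throws IOException {
        out.writeInt(bytes.length);
        out.write(bytes);
    }

    // deserialize: read the length, then fill the buffer in bulk
    void readFields(DataInput in) throws IOException {
        bytes = new byte[in.readInt()];
        in.readFully(bytes);
    }
}
```

A round trip through a DataOutputStream/DataInputStream pair recovers the original bytes, with only a 4-byte length overhead per call.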

> RPC Server can leave a lot of direct buffers 
> ---------------------------------------------
>
>                 Key: HADOOP-4797
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4797
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 0.17.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.18.3, 0.19.1, 0.20.0
>
>         Attachments: HADOOP-4797-branch-18.patch, 
> HADOOP-4797-branch-18.patch, HADOOP-4797-branch-18.patch, HADOOP-4797.patch, 
> HADOOP-4797.patch, TestRpcCpu.patch, TestRpcCpu.patch
>
>
> The RPC server can unwittingly soft-leak direct buffers. One observed case is 
> that one of the namenodes at Yahoo took 40GB of virtual memory though it was 
> configured for 24GB. Most of the memory outside the Java heap is expected to 
> be direct buffers. This was shown to be caused by how the RPC server reads and 
> writes serialized data. The cause and proposed fix are in the following comment.
>   

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
