[
https://issues.apache.org/jira/browse/HADOOP-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ankur updated HADOOP-2975:
--------------------------
Attachment: Hadoop-2975-v1.patch
Here is the simplest of patches that fixes the issue.
- The data buffer is allocated only if it is null (first use) or the current
data length exceeds the buffer's capacity.
- After request serialization, the buffer is simply cleared instead of being
set to null.
I am not sure whether any new unit tests are required for this. The existing
test case, TestRPC, has been run and passes.
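The reuse strategy in the two bullets above can be sketched roughly as follows. This is an illustrative sketch only, not the actual patch: the class and field names (Connection, data) are hypothetical stand-ins for the real IPC server code.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of per-connection buffer reuse as described above.
class Connection {
    private ByteBuffer data; // reused across requests on this connection

    // Prepare a buffer for an incoming request of the given length.
    ByteBuffer readRequest(int dataLength) {
        // Allocate only on first use, or when the request is larger
        // than the current buffer's capacity.
        if (data == null || dataLength > data.capacity()) {
            data = ByteBuffer.allocate(dataLength);
        } else {
            // Otherwise reset position and limit instead of discarding
            // the buffer, avoiding a fresh allocation per request.
            data.clear();
        }
        data.limit(dataLength);
        return data;
    }
}
```

With this, a connection handling many small requests touches the allocator once, and only a request larger than the existing buffer forces a reallocation.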
> IPC server should not allocate a buffer for each request
> --------------------------------------------------------
>
> Key: HADOOP-2975
> URL: https://issues.apache.org/jira/browse/HADOOP-2975
> Project: Hadoop Core
> Issue Type: Improvement
> Components: ipc
> Affects Versions: 0.16.0
> Reporter: Hairong Kuang
> Attachments: Hadoop-2975-v1.patch
>
>
> Currently the IPC server allocates a buffer for each incoming request. The
> buffer is thrown away after the request is serialized. This leads to very
> inefficient heap utilization. It would be nicer if all requests from one
> connection could share a common buffer, since the IPC server reads only one
> request from a socket at a time.