[ https://issues.apache.org/jira/browse/HDFS-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12981415#action_12981415 ]

Todd Lipcon commented on HDFS-1583:
-----------------------------------

We did this optimization for the RPC layer in HBase long ago (HBASE-82). Here's 
the current code:

https://github.com/apache/hbase/blob/trunk/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java#L387

Is there a way to make the change to ObjectWritable so that the new version can 
still read old data?
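
For readers without the HBase tree handy, here is a minimal sketch of the kind of 
byte-array fast path that change adds (the class and method names below are 
illustrative, not the actual HbaseObjectWritable API): a byte[] is written as a 
length prefix followed by the raw bytes, rather than element by element.

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Illustrative only: byte[] handled as one bulk copy.
public class ByteArrayFastPath {

  // Write a length prefix, then the raw bytes in a single call.
  static void writeByteArray(DataOutput out, byte[] bytes) throws IOException {
    out.writeInt(bytes.length);
    out.write(bytes, 0, bytes.length);
  }

  // Read the length, then fill the buffer with a single bulk read.
  static byte[] readByteArray(DataInput in) throws IOException {
    byte[] bytes = new byte[in.readInt()];
    in.readFully(bytes);
    return bytes;
  }
}
{code}

As far as I can tell, the stock ObjectWritable array path instead recurses into 
writeObject per element, repeating the declared-class name each time, which is 
where the per-call overhead comes from on an ~8000-byte payload.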

> Improve backup-node sync performance by wrapping RPC parameters
> ---------------------------------------------------------------
>
>                 Key: HDFS-1583
>                 URL: https://issues.apache.org/jira/browse/HDFS-1583
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Liyin Liang
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1583-1.patch, HDFS-1583-2.patch
>
>
> The journal edit records are sent by the active name-node to the backup-node 
> via RPC:
> {code:}
>   public void journal(NamenodeRegistration registration,
>                       int jAction,
>                       int length,
>                       byte[] records) throws IOException;
> {code}
> During the name-node throughput benchmark, the size of the byte array _records_ 
> is around *8000*, which makes the serialization and deserialization 
> time-consuming. I wrote a simple application to test RPC with a byte-array 
> parameter: when the size reaches 8000, each RPC call needs about 6 ms, while the 
> name-node syncs 8 KB to local disk in only 0.3~0.4 ms.
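
A rough sketch of the wrapping idea from the issue title (the class name and 
fields below are hypothetical, not necessarily what the attached patches do): 
bundle the journal arguments into a single Writable whose serialization copies 
the records array in bulk.

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical wrapper: jAction plus the edit records, serialized as one unit.
public class JournalRecordsWritable implements Writable {
  private int jAction;
  private byte[] records;

  public JournalRecordsWritable() {}   // no-arg constructor for deserialization

  public JournalRecordsWritable(int jAction, byte[] records) {
    this.jAction = jAction;
    this.records = records;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(jAction);
    out.writeInt(records.length);
    out.write(records, 0, records.length);   // one bulk write of the payload
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    jAction = in.readInt();
    records = new byte[in.readInt()];
    in.readFully(records);                   // one bulk read of the payload
  }

  public int getAction() { return jAction; }
  public byte[] getRecords() { return records; }
}
{code}

With a wrapper like this, the journal call would take one argument instead of 
four, and the separate length parameter could be dropped since the array carries 
its own length.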

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
