[ https://issues.apache.org/jira/browse/HDFS-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12911110#action_12911110 ]

Suresh Srinivas commented on HDFS-1407:
---------------------------------------

> One nit: writeId() and readId() should be called in Block.write() and 
> Block.readFields().
Block.write() and Block.readFields() currently serialize the fields in the order 
blockId, block length, generation stamp. Calling writeId() and readId() from them 
would change that order to blockId, generation stamp, block length. Is that fine?
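A minimal sketch of the compatibility concern above. The class and method names here are hypothetical stand-ins (not the real org.apache.hadoop.hdfs.protocol.Block); they only illustrate that moving the generation stamp before the length changes the wire layout byte for byte:

```java
import java.io.*;
import java.util.Arrays;

// Illustrative only: mimics the two field orders discussed, using
// plain DataOutputStream longs. Real Block serialization may differ.
public class BlockOrderSketch {
    // Current Block.write() order: blockId, length, generation stamp.
    static byte[] currentOrder(long id, long len, long gs) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeLong(id);   // blockId
        out.writeLong(len);  // block length
        out.writeLong(gs);   // generation stamp
        return buf.toByteArray();
    }

    // Order if writeId() ran first: blockId, generation stamp, length.
    static byte[] writeIdOrder(long id, long len, long gs) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeLong(id);   // blockId
        out.writeLong(gs);   // generation stamp (moved up)
        out.writeLong(len);  // block length
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] a = currentOrder(42L, 1024L, 7L);
        byte[] b = writeIdOrder(42L, 1024L, 7L);
        // Same fields, different wire order: a reader expecting one
        // layout would mis-parse the other.
        System.out.println(Arrays.equals(a, b)); // false
    }
}
```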

> Use Block in DataTransferProtocol
> ---------------------------------
>
>                 Key: HDFS-1407
>                 URL: https://issues.apache.org/jira/browse/HDFS-1407
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>             Fix For: 0.22.0
>
>         Attachments: HDFS-1400.trunk.patch, HDFS-1400.trunk.patch
>
>
> Currently DataTransferProtocol has methods such as:
> {noformat}
>     public static void opReadBlock(DataOutputStream out, long blockId,
>         long blockGs, long blockOffset, long blockLen, String clientName,
>         Token<BlockTokenIdentifier> blockToken) throws IOException;
> {noformat}
> The client has to pass the individual elements that make up block identification, 
> such as blockId and generation stamp. I propose methods of the following 
> form:
> {noformat}
>     public static void opReadBlock(DataOutputStream out, Block block,
>         long blockOffset, long blockLen, String clientName,
>         Token<BlockTokenIdentifier> blockToken) throws IOException;
> {noformat}
> With this, the client need not understand the internals of Block: it receives a 
> Block over RPC and passes it on unchanged in DataTransferProtocol. This helps 
> make Block opaque to the client.
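The proposed shape can be sketched as follows. This is a simplified, self-contained illustration: the Block class, the opcode value, and the trimmed-down opReadBlock signature here are hypothetical stand-ins, not the real HDFS classes. The point is that the caller hands over an opaque Block and never unpacks blockId or generation stamp itself:

```java
import java.io.*;

// Illustrative sketch of the proposed API style (not the real
// DataTransferProtocol): Block serializes itself, so callers stay
// ignorant of its internals.
public class OpaqueBlockSketch {
    static final byte OP_READ_BLOCK = 81; // placeholder opcode

    // Stand-in for org.apache.hadoop.hdfs.protocol.Block.
    static class Block {
        final long blockId, numBytes, generationStamp;
        Block(long id, long len, long gs) {
            blockId = id; numBytes = len; generationStamp = gs;
        }
        // Block owns its own wire format.
        void write(DataOutput out) throws IOException {
            out.writeLong(blockId);
            out.writeLong(numBytes);
            out.writeLong(generationStamp);
        }
    }

    // Proposed style: the op takes a Block, not its fields
    // (clientName and blockToken omitted to keep the sketch small).
    static void opReadBlock(DataOutputStream out, Block block,
                            long blockOffset, long blockLen) throws IOException {
        out.writeByte(OP_READ_BLOCK);
        block.write(out);          // opaque to this caller
        out.writeLong(blockOffset);
        out.writeLong(blockLen);
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        opReadBlock(new DataOutputStream(buf), new Block(42L, 1024L, 7L), 0L, 1024L);
        // 1 opcode byte + 5 longs (3 from Block, 2 from the op) = 41 bytes.
        System.out.println(buf.size()); // 41
    }
}
```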

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.