[ https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230191#comment-14230191 ]

Suresh Srinivas commented on HDFS-7435:
---------------------------------------

bq. The in-memory representation of the blocks list is merely an implementation 
detail that should not influence the PB encoding. Ie. Leave blocksBuffer as a 
non-repeating field, and let the NN chunk the block list however it sees fit.
I think the chunked buffers should be sent all the way from the DataNode to the 
NameNode. The NameNode can store them in a different way that fits its needs 
(though if they are chunked, I do not see the NameNode storing them differently). 
This is important because the problem of a large contiguous array also affects 
the DataNode: there are deployments with 60 disks in a single node and more than 
10 million blocks on a single DataNode.
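The chunking idea above can be sketched in Java. This is not the HDFS-7435 patch; the class name, method name, and the chunk-size parameter are hypothetical, and it only shows the shape of the technique: the block list (3 longs per replica) is split into bounded byte buffers so that neither side ever needs one huge contiguous array.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: encode a block list as fixed-size byte chunks
// instead of one contiguous array, so neither the DataNode nor the
// NameNode must allocate a single huge buffer.
public class BlockListChunker {
    static final int LONGS_PER_REPLICA = 3; // blockId, length, genStamp

    // maxLongsPerChunk is an assumed tuning knob, not an HDFS setting.
    public static List<ByteBuffer> chunk(long[] blocks, int maxLongsPerChunk) {
        List<ByteBuffer> chunks = new ArrayList<>();
        for (int off = 0; off < blocks.length; off += maxLongsPerChunk) {
            int n = Math.min(maxLongsPerChunk, blocks.length - off);
            ByteBuffer buf = ByteBuffer.allocate(n * Long.BYTES);
            for (int i = 0; i < n; i++) {
                buf.putLong(blocks[off + i]); // fixed-width, no boxing
            }
            buf.flip(); // make the chunk ready for reading/sending
            chunks.add(buf);
        }
        return chunks;
    }
}
```

Each chunk could then be carried as one PB `bytes` field (the `blocksBuffer` idea), keeping every allocation bounded regardless of how many blocks the DataNode holds.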

> PB encoding of block reports is very inefficient
> ------------------------------------------------
>
>                 Key: HDFS-7435
>                 URL: https://issues.apache.org/jira/browse/HDFS-7435
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, namenode
>    Affects Versions: 2.0.0-alpha, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>         Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, HDFS-7435.patch
>
>
> Block reports are encoded as a PB repeated long.  Repeated fields use an 
> {{ArrayList}} with a default capacity of 10.  A block report containing tens or 
> hundreds of thousands of longs (3 for each replica) is extremely expensive 
> because the {{ArrayList}} must realloc many times.  Also, decoding repeated 
> fields boxes the primitive longs, which must then be unboxed.
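The cost described in the issue can be illustrated with a minimal Java sketch (the class and method names are hypothetical, and this only mimics the decode path rather than using protobuf itself): a repeated-long decode appends boxed {{Long}} objects to a growing {{ArrayList}}, while a pre-sized primitive array avoids both the boxing and the reallocations.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of why a PB repeated long is costly to decode.
public class RepeatedLongCost {
    // Mimics a repeated-field decode: every value is autoboxed into a
    // Long, and the ArrayList (default capacity 10) reallocates its
    // backing array repeatedly as the report grows.
    public static List<Long> decodeBoxed(long[] wire) {
        List<Long> out = new ArrayList<>();
        for (long v : wire) {
            out.add(v); // autoboxing + possible realloc on each add
        }
        return out;
    }

    // Pre-sized primitive decode: one allocation, no boxing.
    public static long[] decodePrimitive(long[] wire) {
        long[] out = new long[wire.length];
        System.arraycopy(wire, 0, out, 0, wire.length);
        return out;
    }
}
```

For a report with hundreds of thousands of longs, the boxed path allocates one object per value plus roughly log-many backing-array copies, which is exactly the overhead the issue attributes to the repeated-long encoding.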



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
