[ https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620073#comment-14620073 ]

Walter Su commented on HDFS-8719:
---------------------------------

bq. Do we also need to update the current writeChunk function? Also shall we 
put these two ops into the same function and always call the combined function?
Good idea. 003 patch did that.
LGTM. +1. Will commit shortly.

> Erasure Coding: client generates too many small packets when writing parity 
> data
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-8719
>                 URL: https://issues.apache.org/jira/browse/HDFS-8719
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Li Bo
>            Assignee: Li Bo
>         Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
> HDFS-8719-HDFS-7285-002.patch, HDFS-8719-HDFS-7285-003.patch
>
>
> Typically a packet is about 64 KB, but when writing parity data the client 
> generates many small packets of only 512 bytes each. This can slow the write 
> speed and increase network I/O.
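To illustrate the idea behind the fix (this is only a sketch, not the actual HDFS-8719 patch; the class and method names below are hypothetical): rather than sending each 512-byte parity chunk as its own packet, the writer can accumulate chunks in a packet-sized buffer and enqueue a packet only when the buffer fills up.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the HDFS-8719 patch): accumulate 512-byte
 * chunks into a ~64 KB buffer and send one packet per full buffer,
 * instead of one packet per chunk.
 */
public class ChunkBuffer {
    static final int CHUNK_SIZE = 512;        // typical checksum chunk size
    static final int PACKET_SIZE = 64 * 1024; // typical DFS packet size

    private final byte[] buf = new byte[PACKET_SIZE];
    private int pos = 0;
    // Stand-in for the packet queue: records the size of each "sent" packet.
    private final List<Integer> sentPacketSizes = new ArrayList<>();

    /** Buffer one chunk; flush when another chunk would not fit. */
    public void writeChunk(byte[] chunk) {
        System.arraycopy(chunk, 0, buf, pos, chunk.length);
        pos += chunk.length;
        if (pos + CHUNK_SIZE > PACKET_SIZE) {
            flush();
        }
    }

    /** Send whatever is buffered as a single packet. */
    public void flush() {
        if (pos > 0) {
            sentPacketSizes.add(pos); // in HDFS this would enqueue a packet
            pos = 0;
        }
    }

    public List<Integer> getSentPacketSizes() {
        return sentPacketSizes;
    }
}
```

With this buffering, 200 parity chunks produce two packets (one full 64 KB packet plus a remainder on flush) instead of 200 tiny ones, which is the reduction in packet count the issue is after.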



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
