[
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617909#comment-14617909
]
Hadoop QA commented on HDFS-8719:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12743914/HDFS-8719-001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / c9dd2ca |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11615/console |
This message was automatically generated.
> Erasure Coding: client generates too many small packets when writing parity
> data
> --------------------------------------------------------------------------------
>
> Key: HDFS-8719
> URL: https://issues.apache.org/jira/browse/HDFS-8719
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Li Bo
> Assignee: Li Bo
> Attachments: HDFS-8719-001.patch
>
>
> Typically a packet is about 64 KB, but when writing parity data the client generates many small packets of only 512 bytes each. This may slow the write speed and increase network I/O.
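The fix the report points toward is coalescing 512-byte parity cells into full-size (~64 KB) packets before sending. A minimal sketch of that buffering idea follows; the class, method names, and constants here are hypothetical illustrations, not Hadoop's actual DFSStripedOutputStream/DFSPacket code:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Toy illustration: buffer small parity cells and emit one packet per
// ~64 KB of accumulated data, instead of one packet per 512-byte cell.
public class PacketCoalescer {
    static final int PACKET_SIZE = 64 * 1024; // typical packet payload size
    static final int CELL_SIZE = 512;         // parity cell size from the report

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    final List<byte[]> sentPackets = new ArrayList<>();

    // Buffer one parity cell; flush only once a full packet has accumulated.
    void writeCell(byte[] cell) {
        buffer.write(cell, 0, cell.length);
        if (buffer.size() >= PACKET_SIZE) {
            flush();
        }
    }

    // Emit whatever is buffered as a single packet (e.g. at stream close).
    void flush() {
        if (buffer.size() > 0) {
            sentPackets.add(buffer.toByteArray());
            buffer.reset();
        }
    }

    public static void main(String[] args) {
        PacketCoalescer out = new PacketCoalescer();
        // 256 cells x 512 B = 128 KB: two full packets instead of 256 tiny ones.
        for (int i = 0; i < 256; i++) {
            out.writeCell(new byte[CELL_SIZE]);
        }
        out.flush();
        System.out.println("packets sent: " + out.sentPackets.size());
    }
}
```

With coalescing, 256 cell writes produce 2 packets rather than 256, cutting per-packet header and network round-trip overhead proportionally.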
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)