[
https://issues.apache.org/jira/browse/HDFS-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339979#comment-14339979
]
Hadoop QA commented on HDFS-7308:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12701288/HDFS-7308.2.patch
against trunk revision 8ca0d95.
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/9679//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9679//console
This message is automatically generated.
> DFSClient write packet size may > 64kB
> --------------------------------------
>
> Key: HDFS-7308
> URL: https://issues.apache.org/jira/browse/HDFS-7308
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Takuya Fukudome
> Priority: Minor
> Attachments: HDFS-7308.1.patch, HDFS-7308.2.patch
>
>
> In DFSOutputStream.computePacketChunkSize(..),
> {code}
> private void computePacketChunkSize(int psize, int csize) {
>   final int chunkSize = csize + getChecksumSize();
>   chunksPerPacket = Math.max(psize/chunkSize, 1);
>   packetSize = chunkSize*chunksPerPacket;
>   if (DFSClient.LOG.isDebugEnabled()) {
>     ...
>   }
> }
> {code}
> We have the following usual values:
> || variables || usual values ||
> | psize | dfsClient.getConf().writePacketSize = 64kB |
> | csize | bytesPerChecksum = 512B |
> | getChecksumSize(), i.e. CRC size | 32B |
> | chunkSize = csize + getChecksumSize() | 544B (not a power of two) |
> | psize/chunkSize | 120.47 |
> | chunksPerPacket = max(psize/chunkSize, 1) | 120 |
> | packetSize = chunkSize*chunksPerPacket (not including header) | 65280B |
> | PacketHeader.PKT_MAX_HEADER_LEN | 33B |
> | actual packet size | 65280 + 33 = *65313* < 65536 = 64k |
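The arithmetic in the table can be reproduced with a short standalone sketch. The constants below are hardcoded from the usual values in the table, so this is an illustration, not the actual DFSOutputStream code:

```java
public class PacketSizeCheck {
  public static void main(String[] args) {
    // Usual values per the table above (assumed constants, not read from config)
    final int psize = 64 * 1024;      // dfsClient.getConf().writePacketSize = 64kB
    final int csize = 512;            // bytesPerChecksum
    final int checksumSize = 32;      // getChecksumSize() per the table
    final int pktMaxHeaderLen = 33;   // PacketHeader.PKT_MAX_HEADER_LEN

    // Same computation as computePacketChunkSize(..); note the integer division
    final int chunkSize = csize + checksumSize;                  // 544
    final int chunksPerPacket = Math.max(psize / chunkSize, 1);  // 120
    final int packetSize = chunkSize * chunksPerPacket;          // 65280

    final int actualPacketSize = packetSize + pktMaxHeaderLen;   // 65313
    System.out.println(actualPacketSize);
    System.out.println(actualPacketSize < psize);  // true, but not guaranteed
  }
}
```

Running it prints 65313 and true, matching the table; the point of the issue is that the `true` depends on the particular header length rather than on the computation itself.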
> It is fortunate that the usual actual packet size is 65313 < 65536 = 64k;
> the calculation above does not guarantee this bound in general (e.g. if
> PKT_MAX_HEADER_LEN were 257, the actual packet size would be 65280 + 257 =
> 65537 > 64k). We should fix the computation in order to guarantee that the
> actual packet size, including the header, never exceeds 64k.
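One way to guarantee the bound, sketched here as an illustration rather than as the committed patch, is to subtract PKT_MAX_HEADER_LEN from the packet-size budget before computing chunksPerPacket, so the header is accounted for up front:

```java
public class PacketSizeFix {
  // Hypothetical helper: budget the maximum header length before dividing,
  // so that body + header can never exceed psize.
  static int computeChunksPerPacket(int psize, int chunkSize, int pktMaxHeaderLen) {
    final int bodySize = psize - pktMaxHeaderLen;
    return Math.max(bodySize / chunkSize, 1);
  }

  public static void main(String[] args) {
    final int psize = 64 * 1024;
    final int chunkSize = 512 + 32;  // csize + checksum size, per the table

    // Usual header length: 33B
    int n = computeChunksPerPacket(psize, chunkSize, 33);
    System.out.println(chunkSize * n + 33 <= psize);   // stays within 64k

    // Hypothetical larger header from the example in the description: 257B
    n = computeChunksPerPacket(psize, chunkSize, 257);
    System.out.println(chunkSize * n + 257 <= psize);  // still within 64k
  }
}
```

With the usual values this still yields 120 chunks per packet (65503 / 544 = 120), so the common-case packet layout is unchanged while the 64k bound becomes a guarantee instead of a coincidence.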
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)