[ https://issues.apache.org/jira/browse/HDFS-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14077494#comment-14077494 ]
Hadoop QA commented on HDFS-6758:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12658359/HDFS-6758.01.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
  org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  org.apache.hadoop.TestGenericRefresh
  org.apache.hadoop.TestRefreshCallQueue
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7484//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7484//console
This message is automatically generated.
> block writer should pass the expected block size to DataXceiverServer
> ---------------------------------------------------------------------
>
> Key: HDFS-6758
> URL: https://issues.apache.org/jira/browse/HDFS-6758
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, hdfs-client
> Affects Versions: 2.4.1
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HDFS-6758.01.patch
>
>
> DataXceiver initializes the block size to the default block size for the
> cluster. This size is later used by FsDatasetImpl when applying the
> VolumeChoosingPolicy (a sketch of such a policy follows the excerpts below).
> {code}
> block.setNumBytes(dataXceiverServer.estimateBlockSize);
> {code}
> where
> {code}
> /**
>  * We need an estimate for block size to check if the disk partition has
>  * enough space. For now we set it to be the default block size set
>  * in the server-side configuration, which is not ideal because the
>  * default block size should be a client-side configuration.
>  * A better solution is to include the estimated block size in the header,
>  * i.e. either the actual block size or the default block size.
>  */
> final long estimateBlockSize;
> {code}
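>
> For context, the estimate above is consumed by the volume choosing policy,
> which compares it against each volume's free space. A minimal round-robin
> sketch (simplified for illustration; not the exact shipped
> RoundRobinVolumeChoosingPolicy code) looks like:
> {code}
> import java.io.IOException;
> import java.util.List;
>
> import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
> import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;
>
> // Simplified sketch: the estimated block size is the only hint the
> // policy has about how much space the replica will eventually need.
> public class RoundRobinPolicySketch<V extends FsVolumeSpi> {
>   private int curVolume = 0;
>
>   public synchronized V chooseVolume(List<V> volumes, long blockSize)
>       throws IOException {
>     final int startVolume = curVolume;
>     while (true) {
>       V volume = volumes.get(curVolume);
>       curVolume = (curVolume + 1) % volumes.size();
>       // If blockSize is an underestimate (e.g. the server default while
>       // the client writes larger blocks), this check can accept a volume
>       // that later runs out of space mid-write.
>       if (volume.getAvailable() >= blockSize) {
>         return volume;
>       }
>       if (curVolume == startVolume) {
>         throw new DiskOutOfSpaceException(
>             "No volume has " + blockSize + " bytes available");
>       }
>     }
>   }
> }
> {code}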
> In most cases the writer can just pass the maximum expected block size to the
> DN instead of having to use the cluster default.
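>
> A possible shape of that change (illustrative only; {{requestedBlockSize}}
> is a hypothetical name, not necessarily what the patch uses) is for the
> DataNode to prefer a size carried in the write request and fall back to
> the server default:
> {code}
> // Illustrative sketch: prefer the size the writer sent with the request
> // over the cluster-wide default estimate.
> long estimatedSize = requestedBlockSize > 0
>     ? requestedBlockSize                    // expected size from the writer
>     : dataXceiverServer.estimateBlockSize;  // fall back to server default
> block.setNumBytes(estimatedSize);
> {code}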