[https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959183#comment-14959183]
Hudson commented on HDFS-9220:
------------------------------
FAILURE: Integrated in Hadoop-trunk-Commit #8645 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/8645/])
HDFS-9220. Reading small file (< 512 bytes) that is open for append fails
due to incorrect checksum. (kihwal: rev
c7c36cbd6218f46c33d7fb2f60cd52cb29e6d720)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
> Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
> ----------------------------------------------------------------------------------------
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.1
> Reporter: Bogdan Raducanu
> Assignee: Jing Zhao
> Priority: Blocker
> Fix For: 3.0.0, 2.7.2
>
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch,
> HDFS-9220.002.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN DFSClient:1150 - fetchBlockByteRange(). Got a
> checksum exception for /tmp/file0.05355529331575182 at
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block:
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> The issue does not occur in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure happens when reading the last block of a file
> and that block contains 512 bytes or fewer.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)