[https://issues.apache.org/jira/browse/HDFS-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13042740#comment-13042740]
Hudson commented on HDFS-2021:
------------------------------
Integrated in Hadoop-Hdfs-trunk #685 (See [https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/685/])
HDFS-2021. Update numBytesAcked before sending the ack in PacketResponder.
Contributed by John George
szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1130339
Files :
* /hadoop/hdfs/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* /hadoop/hdfs/trunk/src/test/hdfs/org/apache/hadoop/hdfs/TestWriteRead.java
* /hadoop/hdfs/trunk/CHANGES.txt
> TestWriteRead failed with inconsistent visible length of a file
> ----------------------------------------------------------------
>
> Key: HDFS-2021
> URL: https://issues.apache.org/jira/browse/HDFS-2021
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Environment: Linux RHEL5
> Reporter: CW Chung
> Assignee: John George
> Fix For: 0.23.0
>
> Attachments: HDFS-2021-2.patch, HDFS-2021.patch
>
>
> The JUnit test failed when iterating a number of times with a larger chunk
> size on Linux. Once in a while, the visible number of bytes seen by a reader
> was slightly less than expected.
> When run with the following parameters, it failed more often on Linux (as
> reported by John George) than on my Mac:
> private static final int WR_NTIMES = 300;
> private static final int WR_CHUNK_SIZE = 10000;
> Adding more debugging output to the source, this is a sample of the output:
> Caused by: java.io.IOException: readData mismatch in byte read: expected=2770000 ; got 2765312
>         at org.apache.hadoop.hdfs.TestWriteRead.readData(TestWriteRead.java:141)
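The commit summary above ("Update numBytesAcked before sending the ack in PacketResponder") points at an ordering race: if the responder sends the ack first and updates the acked-byte counter afterwards, a reader woken by that ack can read a stale visible length, which matches the "got slightly less than expected" symptom. The following is a minimal standalone sketch of that ordering fix, not the actual BlockReceiver code; the class and field names (AckOrderingSketch, numBytesAcked, ackSent) are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the HDFS-2021 ordering fix: update the
// acked-byte counter BEFORE signaling the ack, so any reader that
// reacts to the ack observes the full visible length.
public class AckOrderingSketch {
    static final AtomicLong numBytesAcked = new AtomicLong(0);
    static final CountDownLatch ackSent = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        final long packetLen = 10000; // mirrors WR_CHUNK_SIZE in TestWriteRead

        Thread responder = new Thread(() -> {
            numBytesAcked.addAndGet(packetLen); // update first (the fix) ...
            ackSent.countDown();                // ... then send the ack
        });
        responder.start();

        ackSent.await();                        // reader woken by the ack
        long visible = numBytesAcked.get();
        if (visible != packetLen) {
            throw new AssertionError("expected=" + packetLen + " ; got " + visible);
        }
        System.out.println("visible length = " + visible);
        responder.join();
    }
}
```

With the pre-fix order (countDown before addAndGet) the reader could see 0 here; the happens-before edge from countDown/await only helps if the counter write precedes the countDown.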
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira