[ https://issues.apache.org/jira/browse/HDFS-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12864108#action_12864108 ]
sam rash commented on HDFS-1057:
--------------------------------

ah, sorry, meant to respond to that: I ran the unit tests I have that exercise this in a debugger and set breakpoints to verify the code path is exercised. I agree it's a complicated bit of code -- I cleaned it up a tiny bit. In reality, I think the packet-sending code (the sendChunks method) should be broken out into two subclasses/methods -- one for the transferTo path and one for the regular path -- probably as separate classes (non-static inner classes so they can use the parent class state, maybe; I haven't thought the details through yet).

> Concurrent readers hit ChecksumExceptions if following a writer to very end of file
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-1057
>                 URL: https://issues.apache.org/jira/browse/HDFS-1057
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: data-node
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: sam rash
>            Priority: Blocker
>         Attachments: conurrent-reader-patch-1.txt, conurrent-reader-patch-2.txt, conurrent-reader-patch-3.txt
>
>
> In BlockReceiver.receivePacket, it calls replicaInfo.setBytesOnDisk before calling flush(). Therefore, if there is a concurrent reader, it's possible to race here - the reader will see the new length while those bytes are still in the buffers of BlockReceiver. Thus the client will potentially see checksum errors or EOFs. Additionally, the last checksum chunk of the file is made accessible to readers even though it is not stable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
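
A minimal sketch of the split suggested in the comment above, not a patch and not the actual BlockSender code: the per-packet sending logic sits behind a small interface, with one implementation for the transferTo (zero-copy) path and one for the regular buffered path. All class and method names here (ChunkSender, TransferToChunkSender, BufferedChunkSender) are hypothetical.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.WritableByteChannel;

    // Hypothetical sketch of splitting the packet-sending code into two
    // implementations, one per I/O path; names do not match the real code.
    public class PacketSenderSketch {

      /** Common contract: send one packet's worth of chunks starting at offset. */
      interface ChunkSender {
        int sendChunks(long offset, int len) throws IOException;
      }

      /** transferTo path: hand the file region straight to the socket channel (zero-copy). */
      static class TransferToChunkSender implements ChunkSender {
        private final FileChannel blockIn;
        private final WritableByteChannel sockOut;

        TransferToChunkSender(FileChannel blockIn, WritableByteChannel sockOut) {
          this.blockIn = blockIn;
          this.sockOut = sockOut;
        }

        @Override
        public int sendChunks(long offset, int len) throws IOException {
          long sent = 0;
          while (sent < len) {
            long n = blockIn.transferTo(offset + sent, len - sent, sockOut);
            if (n <= 0) {
              throw new IOException("transferTo made no progress at offset " + (offset + sent));
            }
            sent += n;
          }
          return (int) sent;
        }
      }

      /** Regular path: read chunks into a heap buffer, then write them to the output stream. */
      static class BufferedChunkSender implements ChunkSender {
        private final FileChannel blockIn;
        private final OutputStream out;
        private final ByteBuffer buf = ByteBuffer.allocate(64 * 1024);

        BufferedChunkSender(FileChannel blockIn, OutputStream out) {
          this.blockIn = blockIn;
          this.out = out;
        }

        @Override
        public int sendChunks(long offset, int len) throws IOException {
          int sent = 0;
          while (sent < len) {
            buf.clear();
            buf.limit(Math.min(buf.capacity(), len - sent));
            int n = blockIn.read(buf, offset + sent);
            if (n < 0) {
              throw new IOException("unexpected EOF at offset " + (offset + sent));
            }
            out.write(buf.array(), 0, n);
            sent += n;
          }
          return sent;
        }
      }
    }

Whether these end up as standalone classes or as non-static inner classes of the block sender (so they can share its state) is exactly the detail the comment leaves open.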
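
And a minimal sketch of the ordering problem described in the quoted report, with hypothetical names and simplified streams rather than the real BlockReceiver fields: the bytes (and their checksums) must be flushed before the replica's visible length is advanced, otherwise a concurrent reader can see a length whose bytes are still sitting in the writer's buffers.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical sketch of the flush-before-publish ordering; not the real
    // BlockReceiver.receivePacket code.
    public class ReceivePacketOrderingSketch {
      private final OutputStream dataOut;      // buffered block-data stream
      private final OutputStream checksumOut;  // buffered checksum stream
      private final AtomicLong bytesOnDisk = new AtomicLong(0); // length visible to readers

      ReceivePacketOrderingSketch(OutputStream dataOut, OutputStream checksumOut) {
        this.dataOut = dataOut;
        this.checksumOut = checksumOut;
      }

      void receivePacket(byte[] data, byte[] checksums) throws IOException {
        dataOut.write(data);
        checksumOut.write(checksums);

        // Flush first, so the bytes and their checksums are actually on disk ...
        dataOut.flush();
        checksumOut.flush();

        // ... and only then advance the length that readers are allowed to see.
        // Publishing the length before the flush is the race described above.
        bytesOnDisk.addAndGet(data.length);
      }
    }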