[ 
https://issues.apache.org/jira/browse/HDFS-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949326#comment-13949326
 ] 

Hudson commented on HDFS-6115:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1739 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1739/])
HDFS-6115. Call flush() for every append on block scan verification log.  
Contributed by Vinayakumar B (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581936)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RollingLogsImpl.java


> flush() should be called for every append on block scan verification log
> ------------------------------------------------------------------------
>
>                 Key: HDFS-6115
>                 URL: https://issues.apache.org/jira/browse/HDFS-6115
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.3.0, 2.4.0
>            Reporter: Vinayakumar B
>            Assignee: Vinayakumar B
>            Priority: Minor
>             Fix For: 2.4.0
>
>         Attachments: HDFS-6115.patch
>
>
> {{RollingLogsImpl#out}} is a {{PrintWriter}}, which has a default buffer 
> size of 8 kB.
> Scan verification entries are therefore not flushed until the 8 kB buffer 
> fills, so there is a chance of losing this scan information if the 
> datanode restarts, and those blocks will then be scanned again.
> An 8 kB buffer holds roughly 80 entries, so about 80 blocks would need to 
> be re-scanned.
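
A minimal sketch of the buffering behavior described above, not the actual {{RollingLogsImpl}} code: a {{PrintWriter}} over a {{BufferedWriter}} holds entries in its in-memory buffer (8192 chars by default), so nothing reaches disk until {{flush()}} is called. The log entry format below is illustrative only.

```java
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class VerificationLogSketch {
    public static void main(String[] args) throws IOException {
        File log = File.createTempFile("scan-verification", ".log");
        log.deleteOnExit();

        // Default BufferedWriter buffer is 8192 chars, so appended
        // entries stay in memory until the buffer fills.
        PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(log)));

        out.println("date=\"2014-03-26\" time=\"10:00:00\" id=blk_1001");
        System.out.println("before flush: " + log.length() + " bytes on disk");

        out.flush();  // the fix: flush after every append
        System.out.println("after flush:  " + log.length() + " bytes on disk");
        out.close();
    }
}
```

Before the flush the file length is 0 (the entry is only in the buffer); after the flush the entry is on disk. If the process dies before the buffer fills, every unflushed entry is lost, which is why the patch flushes after each append.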



--
This message was sent by Atlassian JIRA
(v6.2#6252)
