[ https://issues.apache.org/jira/browse/HADOOP-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771235#action_12771235 ]
Hadoop QA commented on HADOOP-6313:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12423266/hflushCommon1.patch
against trunk revision 829289.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
-1 javac. The applied patch generated 174 javac compiler warnings (more
than the trunk's current 172 warnings).
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
+1 core tests. The patch passed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/110/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/110/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/110/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/110/console
This message is automatically generated.
> Expose flush APIs to application users
> --------------------------------------
>
> Key: HADOOP-6313
> URL: https://issues.apache.org/jira/browse/HADOOP-6313
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.21.0
>
> Attachments: hflushCommon.patch, hflushCommon1.patch
>
>
> Earlier this year, Yahoo, Facebook, and HBase developers had a roundtable
> discussion where we agreed to support three types of flush in HDFS (API1, API2,
> and API3); the append project aims to implement API2. Here is a proposal to
> expose these APIs to application users.
> 1. Three flush APIs
> * API1: flushes data out of the client's address space into the socket to the
> data nodes. On return of the call there is no guarantee that the data has left
> the underlying node, and no guarantee that it has reached a DN. New readers
> will eventually see this data if there are no failures.
> * API2: flushes data out to all replicas of the block. The data is in the DNs'
> buffers but not necessarily in the DNs' OS buffers. New readers will see the
> data after the call has returned.
> * API3: flushes data out to all replicas, and all replicas have done the POSIX
> fsync equivalent, i.e., the OS has flushed it to the disk device (though the
> disk may still have it in its cache).
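> As a rough usage sketch (not part of the patch), assuming the
> FSDataOutputStream methods proposed in section 2 below, an application would
> pick a durability level per call:
> {noformat}
> // Hypothetical example, not from the patch: exercises the three flush levels
> // through the FSDataOutputStream API proposed below.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class FlushLevels {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     FileSystem fs = FileSystem.get(conf);
>     FSDataOutputStream out = fs.create(new Path("/tmp/flush-demo"));
>     byte[] record = "one record\n".getBytes("UTF-8");
>
>     out.write(record);
>     out.flush();   // API1: data has left the client's address space; no DN guarantee
>
>     out.write(record);
>     out.hflush();  // API2: data is in all DN replicas' buffers; visible to new readers
>
>     out.write(record);
>     out.hsync();   // API3: all replicas have done the POSIX fsync equivalent
>
>     out.close();
>   }
> }
> {noformat}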
> 2. Support flush APIs in FS
> * FSDataOutputStream#flush supports API1.
> * FSDataOutputStream implements the Syncable interface defined below. If its
> wrapped output stream (i.e., each file system's own output stream) is Syncable,
> FSDataOutputStream#hflush() and hsync() call the wrapped stream's hflush() and
> hsync().
> {noformat}
> public interface Syncable {
>   public void hflush() throws IOException; // support API2
>   public void hsync() throws IOException;  // support API3
> }
> {noformat}
> * In each file system, if only hflush() is implemented, hsync() by default
> calls hflush(). If only hsync() is implemented, hflush() by default calls
> flush().
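> A minimal sketch of the proposed wrapping, assuming the Syncable interface
> above (the class name and the fall-back to flush() for non-Syncable streams
> are illustrative assumptions, not the actual patch):
> {noformat}
> import java.io.DataOutputStream;
> import java.io.IOException;
> import java.io.OutputStream;
>
> // Sketch only: how a wrapper like FSDataOutputStream could forward the new
> // calls to its wrapped per-file-system stream when that stream is Syncable.
> public class SyncableWrapperSketch extends DataOutputStream implements Syncable {
>   private final OutputStream wrapped;
>
>   public SyncableWrapperSketch(OutputStream out) {
>     super(out);
>     this.wrapped = out;
>   }
>
>   @Override
>   public void hflush() throws IOException {   // API2
>     if (wrapped instanceof Syncable) {
>       ((Syncable) wrapped).hflush();
>     } else {
>       wrapped.flush();                        // assumed fall-back: API1 guarantee only
>     }
>   }
>
>   @Override
>   public void hsync() throws IOException {    // API3
>     if (wrapped instanceof Syncable) {
>       ((Syncable) wrapped).hsync();
>     } else {
>       wrapped.flush();                        // assumed fall-back: API1 guarantee only
>     }
>   }
> }
> {noformat}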
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.