[ https://issues.apache.org/jira/browse/HADOOP-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12595455#action_12595455 ]

Hadoop QA commented on HADOOP-1702:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12381710/HADOOP-1702.patch
  against trunk revision 654315.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

    -1 findbugs.  The patch appears to introduce 2 new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2432/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2432/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2432/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2432/console

This message is automatically generated.

> Reduce buffer copies when data is written to DFS
> ------------------------------------------------
>
>                 Key: HADOOP-1702
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1702
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-1702.patch, HADOOP-1702.patch, HADOOP-1702.patch, 
> HADOOP-1702.patch, HADOOP-1702.patch, HADOOP-1702.patch, HADOOP-1702.patch, 
> HADOOP-1702.patch
>
>
> HADOOP-1649 adds extra buffering to improve write performance.  The following 
> diagram shows the buffers, numbered (1) through (5). Each extra buffer adds an 
> extra copy, since most of our read()/write()s match io.bytes.per.checksum, 
> which is much smaller than the buffer size.
> {noformat}
>        (1)                 (2)          (3)                 (5)
>    +---||----[ CLIENT ]---||----<>-----||---[ DATANODE ]---||--<>-> to Mirror 
>    | (buffer)                  (socket)           |  (4)
>    |                                              +--||--+
>  =====                                                    |
>  =====                                                  =====
>  (disk)                                                 =====
> {noformat}
> Currently, the loops that read and write block data handle one checksum chunk 
> at a time. By reading multiple chunks at a time, we can remove buffers (1), 
> (2), (3), and (5). 
> Similarly, some copies can be avoided when clients read data from the DFS.
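
To make the multi-chunk idea concrete, here is a minimal Java sketch; it is not 
taken from the attached patch. It copies data through a buffer that holds many 
io.bytes.per.checksum-sized chunks, so a single read()/write() call moves many 
chunks instead of one. The class name, the 64-chunks-per-buffer value, and the 
omitted checksum handling are assumptions for illustration only.

{noformat}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/**
 * Illustrative sketch only (not the HADOOP-1702 patch): copies data in
 * multiples of the checksum chunk size so each read()/write() moves many
 * chunks at once instead of one io.bytes.per.checksum-sized chunk per call.
 */
public class MultiChunkCopy {

  // io.bytes.per.checksum defaults to 512 in Hadoop; the packet size below
  // is a made-up value for this example.
  static final int BYTES_PER_CHECKSUM = 512;
  static final int CHUNKS_PER_BUFFER = 64;

  static void copy(InputStream in, OutputStream out) throws IOException {
    // One buffer spans many checksum chunks, avoiding a per-chunk copy
    // through an intermediate stream buffer.
    byte[] buf = new byte[BYTES_PER_CHECKSUM * CHUNKS_PER_BUFFER];
    int n;
    while ((n = in.read(buf)) > 0) {
      // A real implementation would compute or verify one CRC per
      // BYTES_PER_CHECKSUM bytes of 'buf' before writing it out.
      out.write(buf, 0, n);
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = new byte[3 * BYTES_PER_CHECKSUM * CHUNKS_PER_BUFFER + 100];
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    copy(new ByteArrayInputStream(data), sink);
    System.out.println("copied " + sink.size() + " bytes");
  }
}
{noformat}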

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
