[
https://issues.apache.org/jira/browse/HADOOP-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12611921#action_12611921
]
Hadoop QA commented on HADOOP-3328:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12385436/HADOOP-3328.patch
against trunk revision 675078.

+1 @author. The patch does not contain any @author tags.

-1 tests included. The patch doesn't appear to include any new or modified
tests.
Please justify why no tests are needed for this patch.

+1 javadoc. The javadoc tool did not generate any warning messages.

+1 javac. The applied patch does not increase the total number of javac
compiler warnings.

+1 findbugs. The patch does not introduce any new Findbugs warnings.

+1 release audit. The applied patch does not increase the total number of
release audit warnings.

+1 core tests. The patch passed core unit tests.

+1 contrib tests. The patch passed contrib unit tests.

Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2826/testReport/

Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2826/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html

Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2826/artifact/trunk/build/test/checkstyle-errors.html

Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2826/console
This message is automatically generated.
> DFS write pipeline : only the last datanode needs to verify checksum
> --------------------------------------------------------------------
>
> Key: HADOOP-3328
> URL: https://issues.apache.org/jira/browse/HADOOP-3328
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.19.0
>
> Attachments: HADOOP-3328.patch, HADOOP-3328.patch
>
>
> Currently all the datanodes in the DFS write pipeline verify the checksum. Since the
> current protocol includes acks from the datanodes, an ack from the last node
> could also serve as verification that the checksum is ok. In that sense, only the
> last datanode needs to verify the checksum. Based on [this
> comment|http://issues.apache.org/jira/browse/HADOOP-1702?focusedCommentId=12575553#action_12575553]
> from HADOOP-1702, CPU consumption might go down by another 25-30% (4/14)
> after HADOOP-1702.
> Also, this would make it easier to use transferTo() and transferFrom() on
> intermediate datanodes, since they would not need to look at the data.
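The proposed rule is simple to state: a datanode verifies the checksum only if it is the last node in the write pipeline, i.e. it has no downstream targets to forward the packet to. A minimal sketch of that decision (hypothetical class and method names, not the actual BlockReceiver code) might look like:

```java
// Hypothetical sketch of the proposed behavior: only the last datanode
// in the DFS write pipeline verifies checksums; intermediate datanodes
// just forward the packet and rely on the downstream ack.
public class PipelinePosition {

    /**
     * @param numDownstreamTargets number of datanodes downstream of this
     *                             one in the write pipeline
     * @return true only for the last datanode, which has no downstream
     *         targets and therefore must verify the checksum itself
     */
    static boolean shouldVerifyChecksum(int numDownstreamTargets) {
        return numDownstreamTargets == 0;
    }

    public static void main(String[] args) {
        // A typical 3-node pipeline: client -> DN1 -> DN2 -> DN3.
        System.out.println(shouldVerifyChecksum(2)); // DN1: false
        System.out.println(shouldVerifyChecksum(1)); // DN2: false
        System.out.println(shouldVerifyChecksum(0)); // DN3 (last): true
    }
}
```

Since the last node's ack only propagates back through the intermediate nodes after a successful verification, a positive ack at the client implies the data was intact along the whole pipeline.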
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.