[
https://issues.apache.org/jira/browse/HADOOP-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597702#action_12597702
]
Hadoop QA commented on HADOOP-3339:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12382218/HADOOP-3339.patch
against trunk revision 656939.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified
tests.
Please justify why no tests are needed for this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2496/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2496/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2496/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2496/console
This message is automatically generated.
> DFS Write pipeline does not detect defective datanode correctly if it times
> out.
> --------------------------------------------------------------------------------
>
> Key: HADOOP-3339
> URL: https://issues.apache.org/jira/browse/HADOOP-3339
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.18.0
>
> Attachments: HADOOP-3339.patch, tmp-3339-dn.patch
>
>
> When DFSClient is writing to DFS, it does not detect the culprit
> datanode correctly (rather, the datanodes do not report it) if the bad node
> times out. Say the last datanode in a 3-node pipeline is too slow or
> defective. In this case, the pipeline removes the first two datanodes in the
> first two attempts. The third attempt has only the 3rd datanode in the
> pipeline, and it fails too. If the pipeline detected the bad 3rd node when
> the first failure occurred, the write would succeed on the second attempt.
> I will attach example logs of such cases. I think this should be fixed in
> 0.17.x.
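The failure mode described above can be sketched as a small simulation. This is not Hadoop code; the node names, the `write_with_recovery` helper, and both recovery policies are illustrative assumptions about the behavior the issue describes, where a timeout gives the client no way to tell which node failed.

```python
def write_with_recovery(pipeline, bad_node, detect_culprit):
    """Retry a pipelined write, dropping one node per failed attempt.

    detect_culprit=False models the reported bug: on a timeout the client
    cannot identify the failed node, so it drops the first node in the
    pipeline. detect_culprit=True models correct detection: the actual
    bad node is dropped on the first failure.

    Returns the attempt number on which the write succeeds, or None if
    the pipeline is exhausted.
    """
    pipeline = list(pipeline)
    attempt = 0
    while pipeline:
        attempt += 1
        if bad_node not in pipeline:
            return attempt  # write succeeds once the bad node is gone
        # The attempt times out; decide which node to remove.
        removed = bad_node if detect_culprit else pipeline[0]
        pipeline.remove(removed)
    return None  # every node was removed; the write fails


nodes = ["dn1", "dn2", "dn3"]  # dn3 is the slow/defective last node

# Buggy behavior: dn1 and dn2 are removed on the first two attempts,
# leaving only dn3, so the third attempt fails as well.
print(write_with_recovery(nodes, "dn3", detect_culprit=False))  # None

# Correct detection: dn3 is dropped on the first failure and the
# second attempt succeeds.
print(write_with_recovery(nodes, "dn3", detect_culprit=True))   # 2
```

This matches the scenario in the description: three attempts all fail under the buggy policy, while correct culprit detection lets the write succeed on the second attempt.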
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.