[
https://issues.apache.org/jira/browse/HDFS-668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12766695#action_12766695
]
Hadoop QA commented on HDFS-668:
--------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12422310/loop.patch
against trunk revision 825689.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/72/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/72/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/72/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/72/console
This message is automatically generated.
> TestFileAppend3#TC7 sometimes hangs
> -----------------------------------
>
> Key: HDFS-668
> URL: https://issues.apache.org/jira/browse/HDFS-668
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Affects Versions: 0.21.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.21.0
>
> Attachments: hdfs-668.patch, loop.patch
>
>
> TestFileAppend3 hangs because it fails to close the file. The following is
> a snippet of the logs that shows the cause of the problem:
> [junit] 2009-10-01 07:00:00,719 WARN hdfs.DFSClient
> (DFSClient.java:setupPipelineForAppendOrRecovery(3004)) - Error Recovery for
> block blk_-4098350497078465335_1007 in pipeline 127.0.0.1:58375,
> 127.0.0.1:36982: bad datanode 127.0.0.1:36982
> [junit] 2009-10-01 07:00:00,721 INFO datanode.DataNode
> (DataXceiver.java:opWriteBlock(224)) - Receiving block
> blk_-4098350497078465335_1007 src: /127.0.0.1:40252 dest: /127.0.0.1:58375
> [junit] 2009-10-01 07:00:00,721 INFO datanode.DataNode
> (FSDataset.java:recoverClose(1248)) - Recover failed close
> blk_-4098350497078465335_1007
> [junit] 2009-10-01 07:00:00,723 INFO datanode.DataNode
> (DataXceiver.java:opWriteBlock(369)) - Received block
> blk_-4098350497078465335_1008 src: /127.0.0.1:40252 dest: /127.0.0.1:58375 of
> size 65536
> [junit] 2009-10-01 07:00:00,724 INFO hdfs.StateChange
> (BlockManager.java:addStoredBlock(1006)) - BLOCK* NameSystem.addStoredBlock:
> addStoredBlock request received for blk_-4098350497078465335_1008 on
> 127.0.0.1:58375 size 65536 But it does not belong to any file.
> [junit] 2009-10-01 07:00:00,724 INFO namenode.FSNamesystem
> (FSNamesystem.java:updatePipeline(3946)) -
> updatePipeline(block=blk_-4098350497078465335_1007, newGenerationStamp=1008,
> newLength=65536, newNodes=[127.0.0.1:58375], clientName=DFSClient_995688145)
>
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.