[
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245282#comment-13245282
]
Hadoop QA commented on HDFS-3119:
---------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12521139/HDFS-3119-1.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9)
warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
+1 core tests. The patch passed unit tests in .
+1 contrib tests. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/2168//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2168//console
This message is automatically generated.
> Overreplicated block is not deleted even after the replication factor is
> reduced after sync followed by closing that file
> ------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-3119
> URL: https://issues.apache.org/jira/browse/HDFS-3119
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 0.24.0
> Reporter: J.Andreina
> Assignee: Ashish Singhi
> Priority: Minor
> Labels: patch
> Fix For: 0.24.0, 0.23.2
>
> Attachments: HDFS-3119-1.patch, HDFS-3119.patch
>
>
> cluster setup:
> --------------
> 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
> step1: write a file "filewrite.txt" of size 90 bytes with sync (not closed)
> step2: change the replication factor to 1 using the command: "./hdfs dfs
> -setrep 1 /filewrite.txt"
> step3: close the file
> * On the NN side, the log message "Decreasing replication from 2 to 1 for
> /filewrite.txt" has occurred, but the over-replicated block is not deleted
> even after the block report is sent from the DN
> * When listing the file in the console using "./hdfs dfs -ls ", the
> replication factor for that file is shown as 1
> * The fsck report for that file shows that the file is still replicated to 2
> datanodes
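For reference, a minimal MiniDFSCluster-based sketch of the reported steps is shown below. It only illustrates the scenario and is not the test added by HDFS-3119-1.patch; the class name, sleep duration and configuration values are assumptions made for the example.
{code:java}
// Hypothetical repro sketch for the scenario described above. The class
// name, sleep duration and config values are illustrative assumptions and
// are not taken from the attached patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class SetrepAfterSyncRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Mirror the reported setup: replication factor 2, 3 s block reports.
    conf.setInt("dfs.replication", 2);
    conf.setLong("dfs.blockreport.intervalMsec", 3000L);

    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/filewrite.txt");

      // step1: write 90 bytes and sync (hflush) without closing the stream.
      FSDataOutputStream out = fs.create(file);
      out.write(new byte[90]);
      out.hflush();

      // step2: reduce the replication factor to 1 while the file is open.
      fs.setReplication(file, (short) 1);

      // step3: close the file.
      out.close();

      // Allow a few block report intervals to pass, then compare the
      // recorded replication factor with the number of live replicas.
      Thread.sleep(10000L);
      int liveReplicas =
          fs.getFileBlockLocations(fs.getFileStatus(file), 0, 90)[0]
              .getHosts().length;
      System.out.println("replication factor = "
          + fs.getFileStatus(file).getReplication()
          + ", live replicas = " + liveReplicas);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}
Before the fix, the reported behaviour corresponds to this printing a replication factor of 1 but 2 live replicas.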