[
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250096#comment-13250096
]
Hudson commented on HDFS-3119:
------------------------------
Integrated in Hadoop-Hdfs-trunk-Commit #2104 (See
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2104/])
HDFS-3119. Overreplicated block is not deleted even after the replication
factor is reduced after sync followed by closing that file. Contributed by
Ashish Singhi. (Revision 1311380)
Result = SUCCESS
umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1311380
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
> Overreplicated block is not deleted even after the replication factor is
> reduced after sync followed by closing that file
> ------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-3119
> URL: https://issues.apache.org/jira/browse/HDFS-3119
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 0.24.0
> Reporter: J.Andreina
> Assignee: Ashish Singhi
> Priority: Minor
> Labels: patch
> Fix For: 0.24.0, 2.0.0
>
> Attachments: HDFS-3119-1.patch, HDFS-3119-1.patch, HDFS-3119.patch
>
>
> cluster setup:
> --------------
> 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
> step1: write a file "filewrite.txt" of size 90 bytes with sync (not closed)
> step2: change the replication factor to 1 using the command: "./hdfs dfs
> -setrep 1 /filewrite.txt"
> step3: close the file
> * On the NN side, the log message "Decreasing replication from 2 to 1 for
> /filewrite.txt" has occurred, but the over-replicated block is not deleted
> even after the block report is sent from the DN
> * While listing the file in the console using "./hdfs dfs -ls", the
> replication factor for that file is shown as 1
> * The fsck report for that file shows that it is still replicated to 2
> datanodes (see the reproduction sketch below)
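
For reference, the reported scenario maps fairly directly onto the MiniDFSCluster test harness (from the hadoop-hdfs test artifact). The following is a minimal reproduction sketch, not the attached patch or the committed TestOverReplicatedBlocks change; the class name OverReplicationRepro and the final DFSTestUtil.waitReplication check are assumptions about how one might verify that the excess replica is removed.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Hypothetical reproduction sketch for the steps in the issue description:
// hflush a file written with replication 2, lower the replication factor to 1
// while the file is still open, close it, then wait for a single replica.
public class OverReplicationRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(2)   // matches the 2-DN setup in the report
        .build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/filewrite.txt");

      // step1: write ~90 bytes with replication 2 and sync (hflush); do not close yet
      FSDataOutputStream out = fs.create(file, (short) 2);
      out.write(new byte[90]);
      out.hflush();

      // step2: reduce the replication factor to 1 while the file is still open
      fs.setReplication(file, (short) 1);

      // step3: close the file
      out.close();

      // Expectation: the NN schedules the excess replica for deletion and the
      // block settles at one replica. In the reported bug the second replica
      // was never deleted, so this wait would not complete.
      DFSTestUtil.waitReplication(fs, file, (short) 1);
      System.out.println("Excess replica was deleted as expected.");
    } finally {
      cluster.shutdown();
    }
  }
}
{code}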