[ https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241441#comment-13241441 ]

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
----------------------------------------------

Instead of casting numExpectedReplicas to short, could you change the 
declarations to short?  i.e.

{code}
+++ hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java       (working copy)
@@ -2767,7 +2767,7 @@
     }
   }
 
-  public void checkReplication(Block block, int numExpectedReplicas) {
+  public void checkReplication(Block block, short numExpectedReplicas) {
     // filter out containingNodes that are marked for decommission.
     NumberReplicas number = countNodes(block);
     if (isNeededReplication(block, numExpectedReplicas, number.liveReplicas())) { 
===================================================================
--- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java      (revision 1306664)
+++ hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java      (working copy)
@@ -2146,7 +2146,7 @@
    * replication factor, then insert them into neededReplication
    */
   private void checkReplicationFactor(INodeFile file) {
-    int numExpectedReplicas = file.getReplication();
+    short numExpectedReplicas = file.getReplication();
     Block[] pendingBlocks = file.getBlocks();
     int nrBlocks = pendingBlocks.length;
     for (int i = 0; i < nrBlocks; i++) {
{code}
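
As a side note, here is a minimal, self-contained sketch of why the short declarations remove the cast (the class and method names below are hypothetical stand-ins, not the actual HDFS classes): INodeFile.getReplication() already hands back a short, so keeping the local variable and the parameter as short means the value never needs an explicit narrowing conversion.

{code}
// Hypothetical stand-in classes, only to illustrate the int-vs-short declaration point.
class ReplicationSketch {
  // Stand-in for INodeFile.getReplication(), which returns a short.
  static short getReplication() {
    return 2;
  }

  // int parameter: the short widens silently on the way in, but any code that
  // needs a short again has to cast back down explicitly.
  static void checkReplicationInt(int numExpectedReplicas) {
    short asShort = (short) numExpectedReplicas; // explicit narrowing cast
    System.out.println("int parameter, cast back: " + asShort);
  }

  // short parameter: the value stays a short end to end, no cast anywhere.
  static void checkReplicationShort(short numExpectedReplicas) {
    System.out.println("short parameter, no cast: " + numExpectedReplicas);
  }

  public static void main(String[] args) {
    short numExpectedReplicas = getReplication();
    checkReplicationInt(numExpectedReplicas);   // implicit widening short -> int
    checkReplicationShort(numExpectedReplicas); // no conversion at all
  }
}
{code}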

                
> Overreplicated block is not deleted even after the replication factor is 
> reduced after sync followed by closing that file
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3119
>                 URL: https://issues.apache.org/jira/browse/HDFS-3119
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.24.0
>            Reporter: J.Andreina
>            Assignee: Ashish Singhi
>            Priority: Minor
>             Fix For: 0.24.0, 0.23.2
>
>         Attachments: HDFS-3119.patch
>
>
> cluster setup:
> --------------
> 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
> step1: write a file "filewrite.txt" of size 90 bytes with sync (not closed) 
> step2: change the replication factor to 1  using the command: "./hdfs dfs 
> -setrep 1 /filewrite.txt"
> step3: close the file
> * At the NN side, the log "Decreasing replication from 2 to 1 for 
> /filewrite.txt" has occurred, but the over-replicated block is not deleted 
> even after the block report is sent from the DN
> * While listing the file in the console using "./hdfs dfs -ls ", the 
> replication factor for that file is shown as 1
> * The fsck report for that file displays that the file is replicated to 2 
> datanodes
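
For reference, the reproduce steps above can be expressed as a small HDFS client program. This is only a sketch under the report's assumptions (a running 1 NN / 2 DN cluster reachable through the default configuration); the class name is made up and it is not part of the original report:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Hdfs3119Repro {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at the 1 NN / 2 DN cluster from the report.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/filewrite.txt");

    // step1: write 90 bytes with replication factor 2 and sync, keeping the file open
    FSDataOutputStream out = fs.create(file, (short) 2);
    out.write(new byte[90]);
    out.hflush(); // the "sync" in the report

    // step2: reduce the replication factor to 1 while the stream is still open
    // (equivalent to "./hdfs dfs -setrep 1 /filewrite.txt")
    fs.setReplication(file, (short) 1);

    // step3: close the file
    out.close();

    // ls reports replication 1 here, yet fsck still shows the block on 2 datanodes.
    System.out.println("replication after close: "
        + fs.getFileStatus(file).getReplication());
  }
}
{code}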

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
