Replication factor is a per-file setting, fixed at write time.  When you
wrote the files, RF was 3.  Changing the -default- replication factor only
applies to newly written files; it does not affect existing ones.
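
To actually bring the existing files down to RF=2, you can set replication
explicitly on those paths, for example with -setrep (the path below is just
a placeholder for wherever your 1k files live):

  hadoop fs -setrep -R -w 2 /path/to/your/files

  # verify the new RF and block counts afterwards
  hadoop fsck /path/to/your/files -files -blocks

-setrep updates the per-file RF, after which the NN will schedule deletion
of the excess replicas; -w waits until the target replication is reached.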

On Tue, Aug 7, 2012 at 10:23 AM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

>  Hi All,
>
>  I ended up with over-replicated blocks which are not getting deleted.
>
>  I did the following:
>
>  Started a Hadoop cluster with three DNs.
>
>  Wrote 1k files with RF (Replication Factor) = 3.
>
>  Changed RF = 2 and excluded one DN from the cluster using decommission.
>
>  After decommission completed, added the same DN back to the cluster
> (by removing its entry from the exclude file and running refreshNodes).
>
>  In the UI I can see RF = 2, but the fsck report shows RF = 3 and all
> blocks as over-replicated.
>
>
>  i) Why is the NN not issuing delete commands for the over-replicated
> blocks?
>
>  ii) Why do fsck and the UI show different RFs for the same file?
>
>
>  Please correct me if I am wrong...
>
>  If this is a bug, should I file an issue?
>
>
>  Thanks And Regards
>
>  Brahma Reddy
>
