Hi All,

I have ended up with over-replicated blocks that are not getting deleted.

Here is what I did:

Started a Hadoop cluster with three DNs.

Wrote 1k files with RF (replication factor) = 3.
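Roughly like the following sketch, using the standard FileSystem API (the file names, count, and data here are only illustrative, not exactly what I used):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteWithRF3 {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Create 1k small files, each with replication factor 3.
    for (int i = 0; i < 1000; i++) {
      Path file = new Path("/test/file-" + i);   // illustrative path
      FSDataOutputStream out = fs.create(file, true, 4096,
          (short) 3,                 // RF = 3
          64L * 1024 * 1024);        // block size (64 MB here)
      out.writeBytes("some test data");
      out.close();
    }
    fs.close();
  }
}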

Changed RF to 2 and excluded one DN from the cluster using decommission.
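The RF change itself was done roughly as below (same illustrative paths; "hadoop fs -setrep" is the command-line equivalent):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChangeRFToTwo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Lower the replication factor of every test file from 3 to 2.
    for (int i = 0; i < 1000; i++) {
      fs.setReplication(new Path("/test/file-" + i), (short) 2);
    }
    fs.close();
  }
}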

After the decommission completed, added the same DN (the one that was excluded) back to the cluster, by removing its entry from the exclude file and running refreshNodes.
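I ran refreshNodes from the command line (dfsadmin -refreshNodes); the programmatic equivalent would be roughly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.util.ToolRunner;

public class RefreshNodes {
  public static void main(String[] args) throws Exception {
    // Ask the NN to re-read the include/exclude files,
    // the same effect as "dfsadmin -refreshNodes" on the command line.
    int rc = ToolRunner.run(new Configuration(), new DFSAdmin(),
        new String[] {"-refreshNodes"});
    System.exit(rc);
  }
}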

In the UI I can see RF=2, but the fsck report shows RF=3 and flags all the blocks as over-replicated.
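For reference, this is roughly how the RF recorded in the NN metadata can be read back through the client API and compared with the fsck output (the path is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckRF {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Print the replication factor recorded for each file,
    // to compare against what the web UI and the fsck report show.
    for (FileStatus st : fs.listStatus(new Path("/test"))) {
      System.out.println(st.getPath() + " : RF = " + st.getReplication());
    }
    fs.close();
  }
}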


i) I do not understand why the NN is not issuing delete commands for the over-replicated blocks.

ii) Why do fsck and the UI show different RFs for the same file?


Please correct me if I am wrong.

If this is an issue, I will file one.


Thanks and regards,

Brahma Reddy
