This setting applies only to new files. For existing files you have to change the replication factor explicitly; you can use 'hadoop fs -setrep' to change replication for a file or directory, and the namenode will then schedule the excess replicas for deletion.
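For example, to drop the replication factor to 1 for everything under a directory (the path below is just a placeholder; this needs a running HDFS cluster):

```shell
# Recursively (-R) set the replication factor to 1 for existing files.
# -w blocks until the change has taken effect; /user/andy/data is a
# hypothetical example path.
hadoop fs -setrep -w -R 1 /user/andy/data
```

After this, the namenode marks the extra replicas as over-replicated and the datanodes delete them in the background; you should see the free space come back without restarting anything.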

Andy Liu wrote:
I'm running a test Hadoop cluster, which had a dfs.replication value of 3.
I'm now running out of disk space, so I've reduced dfs.replication to 1 and
restarted my datanodes.  Is there a way to free up the over-replicated
blocks, or does this happen automatically at some point?

Thanks,
Andy
