I'm running a test Hadoop cluster that had dfs.replication set to 3. I'm now running out of disk space, so I've reduced dfs.replication to 1 and restarted my datanodes. Is there a way to free up the over-replicated blocks, or does this happen automatically at some point?
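
For reference, this is roughly the change I made in hdfs-site.xml before restarting the datanodes (assuming I've put it in the right place):

    <!-- hdfs-site.xml: default replication factor for newly written files -->
    <property>
      <name>dfs.replication</name>
      <value>1</value>   <!-- previously 3 -->
    </property>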
Thanks, Andy
