dfs.replication is only consulted by the client at the time files are written. Changing this setting will not retroactively change the replication factor of existing files. To do that, you need to use the hadoop CLI:
hadoop fs -setrep -R 1 /

--Mike

Vladimir Klimontovich wrote:
> This will happen automatically.
>
> On Aug 27, 2009, at 6:04 PM, Andy Liu wrote:
>
>> I'm running a test Hadoop cluster, which had a dfs.replication value
>> of 3. I'm now running out of disk space, so I've reduced
>> dfs.replication to 1 and restarted my datanodes. Is there a way to
>> free up the over-replicated blocks, or does this happen automatically
>> at some point?
>>
>> Thanks,
>> Andy
>
> ---
> Vladimir Klimontovich,
> skype: klimontovich
> GoogleTalk/Jabber: [email protected]
> Cell phone: +7926 890 2349
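For anyone following along: the dfs.replication default lives in hdfs-site.xml on the client side, so the two steps are (1) lower the default for future writes and (2) run setrep for existing data. A sketch of the config change (the value 1 here just mirrors Andy's setup; pick what your cluster needs):

```xml
<!-- hdfs-site.xml on the machines that write to HDFS -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- Default replication factor applied to NEW files only -->
    <value>1</value>
  </property>
</configuration>
```

After that, `hadoop fs -setrep -R 1 /` takes care of files already in HDFS; the namenode then schedules deletion of the over-replicated block copies in the background, so the space frees up gradually rather than immediately.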
