Hey Mag,

You can bring down the datanode daemon, add the extra directory to
dfs.data.dir, and then restart. Since blocks are written to the data
directories round-robin, the new directory will have lower utilization
at first (once the other directories are full it will start catching
up). If that's not OK you can re-balance the directories by hand with
cp while the datanode is down (before you restart it). If the datanode
stays down longer than the namenode's timeout (roughly 10 minutes by
default) the blocks on that datanode will start getting re-replicated,
but when you bring the datanode back up the namenode will notice the
over-replicated blocks and remove the extras.
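
To make that concrete, the change in hdfs-site.xml would look
something like this (paths here are just examples, use your actual
mount points):

  <property>
    <name>dfs.data.dir</name>
    <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
  </property>

And a by-hand rebalance just means moving block files (together with
their .meta files) between the directories' current/ subdirs while the
datanode is stopped, e.g. (the block ID below is made up):

  # datanode must be stopped first
  mv /data/1/dfs/dn/current/blk_1234567890 \
     /data/1/dfs/dn/current/blk_1234567890_1001.meta \
     /data/3/dfs/dn/current/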

Thanks,
Eli

On Wed, Apr 21, 2010 at 4:09 AM, Mag Gam <[email protected]> wrote:
> I would like to add or remove data directories in my HDFS installation.
>
> Currently, what I do is decommission the entire node, remove
> all content from dfs.data.dir, and re-enable the node. But is there an
> easier way? Each of my nodes holds 2TB of data and I don't want
> to waste the time...
>
