On Sep 27, 2007, at 7:27 PM, Nathan Wang wrote:

Imagine I used Hadoop as fault-tolerant storage. I had 10 nodes, each loaded with 200GB of data. I found the nodes were overloaded and decided to add 2 new boxes with larger disks. How do I redistribute the existing data? I don't want to bump up the replication factor, since the old nodes are already overloaded. It'd be very helpful if this function could be implemented at the system level.

An HDFS block rebalancer is being written for HADOOP-1652. It is not expected to make 0.15 (next week), but should come soon after that.

-- Owen
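
For reference, the rebalancer from HADOOP-1652 is driven from the command line. A minimal sketch of the intended usage, assuming the optional -threshold flag (a percentage of allowed deviation from the average cluster utilization) described in that issue:

    # Run from the Hadoop installation directory on a node that can
    # reach the namenode. The balancer moves blocks from over-utilized
    # datanodes to under-utilized ones until each node's disk usage is
    # within the threshold (here 10%) of the cluster-wide average.
    bin/hadoop balancer -threshold 10

Because the balancer only moves existing replicas, it redistributes load onto the new boxes without raising the replication factor.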
