Hi, I'm having trouble adding new directories to a DataNode with Ambari. After I add a new directory to the DataNode configuration (dfs.datanode.data.dir) and restart the services, I upload a lot of data to HDFS. However, the new directory is never used, and the old directory is full. I have tried modifying the "Reserved space for HDFS" parameter, but it has no effect at all.

I also found this tip in the Hadoop FAQ:

3.12. On an individual data node, how do you balance the blocks on the disk?

Hadoop currently does not have a method by which to do this automatically. To do this manually:

1. Take down the HDFS
2. Use the UNIX mv command to move the individual block and meta pairs from one directory to another on each host
3. Restart the HDFS

I want to know if there is a more effective way to fix this problem. (A rough sketch of my reading of the manual procedure is below.) Thanks a lot.
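For reference, this is roughly how I understand the manual mv procedure from the FAQ. It is only a sketch: the SRC/DST paths and the move_block_pairs helper are my own assumptions about the dfs.datanode.data.dir layout, not anything from Ambari or the FAQ itself, and I would only run something like this with the DataNode fully stopped.

#!/usr/bin/env python3
# Rough sketch of the manual step from the Hadoop FAQ: with the DataNode
# stopped, move blk_* files and their .meta pairs from a full data directory
# to the newly added one, preserving the relative subdirectory layout.
# SRC and DST are assumptions about my layout -- adjust before use.
import os
import shutil

SRC = "/data0/hadoop/hdfs/data/current"  # old, full directory (assumed path)
DST = "/data1/hadoop/hdfs/data/current"  # newly added directory (assumed path)

def move_block_pairs(src, dst, limit=100):
    """Move up to `limit` block files (plus their .meta pairs) from src to dst."""
    moved = 0
    for root, _dirs, files in os.walk(src):
        blocks = [f for f in files
                  if f.startswith("blk_") and not f.endswith(".meta")]
        for blk in blocks:
            rel = os.path.relpath(root, src)
            target = os.path.join(dst, rel)
            os.makedirs(target, exist_ok=True)
            # Keep the block and its checksum file together, e.g.
            # blk_1073741825 and blk_1073741825_1001.meta.
            pair = [blk] + [f for f in files
                            if f.startswith(blk + "_") and f.endswith(".meta")]
            for f in pair:
                shutil.move(os.path.join(root, f), os.path.join(target, f))
            moved += 1
            if moved >= limit:
                return moved
    return moved

if __name__ == "__main__":
    print("moved %d block/meta pairs" % move_block_pairs(SRC, DST))

Doing this by hand per host seems error-prone, which is why I'm hoping there is a better way.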
