On 08/11/10 15:18, Allen Wittenauer wrote:

> Also keep in mind that unless the whole node is completely newfs'd, individual
> drive failures will cause one fs to be empty while the others have data. There is no
> "catch up" mechanism in HDFS that will cause it to put more blocks on this
> newly empty drive.


Not yet, though there is a JIRA issue mentioning the problem. Before anyone implements it, we'll need some way of measuring the free space available to HDFS on every disk and collecting those numbers into a central report.
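As a rough illustration of what such a per-disk report might gather (a minimal sketch, not the mechanism proposed in the JIRA; the directory list is a placeholder for a datanode's dfs.data.dir entries):

```python
import shutil

# Placeholder for the datanode's dfs.data.dir volumes; on a real node
# each entry would be a directory on a separate physical disk.
data_dirs = ["/tmp"]

def disk_report(dirs):
    """Return (path, total_bytes, free_bytes, percent_used) per volume."""
    report = []
    for d in dirs:
        usage = shutil.disk_usage(d)
        pct = 100.0 * (usage.total - usage.free) / usage.total
        report.append((d, usage.total, usage.free, round(pct, 1)))
    return report

def max_imbalance(report):
    """Spread between the most- and least-full volumes, in percentage points."""
    pcts = [entry[3] for entry in report]
    return max(pcts) - min(pcts)
```

A central service aggregating these tuples across the cluster could then spot a freshly reformatted drive as an outlier in `max_imbalance`.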

You can get unbalanced disks even without swapping drives if you use the same set of disks for MapReduce temp/spill storage. This gives you good bandwidth, but it can leave the disks unevenly filled, as can the deletion of large files.

-Steve
