[ https://issues.apache.org/jira/browse/HADOOP-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Raghu Angadi updated HADOOP-1189:
---------------------------------

    Status: Open  (was: Patch Available)

I think we found the problem. Consider the following 'df .' output:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3            950877384 148532012 792685008  16% /export/workspace

'Capacity - Used' is 802345372 and Available is 792685008. A small mismatch is
expected, but it can be very large: on one node we saw 'Capacity - Used' of
21GB while Available was 0. Our code uses 'Capacity - Used' instead of
MIN(Capacity - Used, Available). I will submit a new patch; a sketch of the
corrected computation follows at the end of this message.

> Still seeing some unexpected 'No space left on device' exceptions
> -----------------------------------------------------------------
>
>                 Key: HADOOP-1189
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1189
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.2
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>             Fix For: 0.13.0
>
>         Attachments: HADOOP-1189.patch
>
>
> One of the datanodes has one full partition (disk) out of four. The expected
> behaviour is that the datanode should skip this partition and use only the
> other three. HADOOP-990 fixed some bugs related to this. It seems to work ok,
> but some exceptions are still seeping through. In one case there were 33 of
> these out of 1200+ blocks written to this node. I am not sure what caused
> this. I will submit a patch that prints a more useful message and rethrows
> the original exception.
> Two unlikely reasons I can think of are that the 2% reserved space (8GB in
> this case) is not enough, or that the client somehow still reports a block
> size of zero in some cases. A better error message should help here.
> If you see a small number of these exceptions compared to the number of
> blocks written, you don't need to change anything for now.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
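For illustration, here is a minimal sketch of the corrected computation. The
class and field names below are hypothetical, not the actual Hadoop DF /
FSDataset code; it only demonstrates the MIN(Capacity - Used, Available) idea
using the numbers from the 'df .' output above.

    // Sketch of the fix described above: estimate usable space as
    // MIN(capacity - used, available) rather than capacity - used alone.
    // Names are illustrative; the real datanode code differs.
    public class AvailableSpaceSketch {

        // Values parsed from one 'df -k' line, in 1K blocks.
        static final long CAPACITY_KB  = 950877384L;  // 1K-blocks column
        static final long USED_KB      = 148532012L;  // Used column
        static final long AVAILABLE_KB = 792685008L;  // Available column

        // Old estimate: capacity - used. This can exceed what the
        // filesystem will actually let us write (reserved blocks etc.),
        // so writes can still fail with 'No space left on device'.
        static long oldEstimate() {
            return CAPACITY_KB - USED_KB;
        }

        // Fixed estimate: never claim more than the Available column reports.
        static long fixedEstimate() {
            return Math.min(CAPACITY_KB - USED_KB, AVAILABLE_KB);
        }

        public static void main(String[] args) {
            System.out.println("old   = " + oldEstimate()   + " KB"); // 802345372
            System.out.println("fixed = " + fixedEstimate() + " KB"); // 792685008
        }
    }

On the node mentioned above ('Capacity - Used' of 21GB, Available of 0), the
old estimate would still offer 21GB to writers while the fixed estimate
correctly reports 0.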