[ https://issues.apache.org/jira/browse/HADOOP-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12486789 ]
Koji Noguchi commented on HADOOP-1189:
--------------------------------------

On one node with a full drive, it showed something like _____:

    /dev/_  190451020  181125476  0  100%  /___/____

(total, used, and available space). Used plus available doesn't add up to the total because the OS reserves some space for disk de-frag. Could this be a reason?

> Still seeing some unexpected 'No space left on device' exceptions
> -----------------------------------------------------------------
>
>                 Key: HADOOP-1189
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1189
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.2
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>             Fix For: 0.13.0
>         Attachments: HADOOP-1189.patch
>
>
> One of the datanodes has one full partition (disk) out of four. The expected
> behaviour is that the datanode should skip this partition and use only the
> other three. HADOOP-990 fixed some bugs related to this. It seems to work OK,
> but some exceptions are still seeping through. In one case there were 33 of
> these out of 1200+ blocks written to this node. Not sure what caused this. I
> will submit a patch that prints a more useful message and throws the original
> exception.
> Two unlikely reasons I can think of are that the 2% reserved space (8GB in
> this case) is not enough, or that the client somehow still reports the block
> size as zero in some cases. A better error message should help here.
> If you see only a small number of these exceptions compared to the number of
> blocks written, you don't need to change anything for now.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
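The "doesn't add up" effect in the df output above can be observed programmatically. The following is a minimal sketch (not part of Hadoop; the `ReservedSpace` class and its `reservedBytes` helper are hypothetical names for illustration) that uses `java.nio.file.FileStore` to compare free space against space actually usable by an ordinary writer. On ext2-style filesystems the gap is the block reservation Koji refers to, which is one way a datanode's capacity estimate can disagree with what a write will actually succeed against:

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReservedSpace {
    // Bytes that exist on the filesystem but are not available to
    // ordinary writers (e.g. blocks reserved by the filesystem).
    // getUnallocatedSpace() counts all free blocks; getUsableSpace()
    // counts only those this JVM is allowed to consume.
    static long reservedBytes(String path) throws IOException {
        FileStore fs = Files.getFileStore(Paths.get(path));
        return fs.getUnallocatedSpace() - fs.getUsableSpace();
    }

    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "/";
        FileStore fs = Files.getFileStore(Paths.get(path));
        System.out.println("total="    + fs.getTotalSpace()
                         + " free="    + fs.getUnallocatedSpace()
                         + " usable="  + fs.getUsableSpace()
                         + " reserved=" + reservedBytes(path));
    }
}
```

If `reserved` is nonzero, `used + available` as printed by df will be short of the partition total by roughly that amount, which matches the numbers in the comment above.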