On 05/30/2012 01:55 PM, Nick Anderson wrote:
> Thanks, trying it now. Just looking at it, I suspect I will run out of
> inodes and it will report an out-of-space error.
Hakan, your script just ran me out of inodes. Maybe you already had a fragmented file system to start with, and that triggered it faster. Is there some specific output from debugfs.ocfs2 that will tell me when I am approaching the situation?

Right now I have a script running that does the following: create 1-3 small files, between 1k and 7k in size; copy each small file, prepend some data to it, then move it back on top of the original; create a large file (starting at 20M and growing by 10k each pass); then loop back to the small files. When the file system fills up, I delete the oldest 20 large files and continue. That seems like it should cause some fragmentation.

So far I have still been unable to reproduce the out-of-space error while I have free space and free inodes.
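For what it's worth, here is a rough Python sketch of that loop, in case it helps anyone else reproduce the test. The mount point /mnt/ocfs2, the directory layout, and the helper names are my own assumptions, not the actual script.

#!/usr/bin/env python3
# Sketch of the fragmentation test loop described above. Assumptions:
# mount point, directory names, and prepend payload are illustrative only.
import errno
import os
import random
import shutil

MOUNT = "/mnt/ocfs2"                 # assumed OCFS2 mount point
SMALL_DIR = os.path.join(MOUNT, "small")
LARGE_DIR = os.path.join(MOUNT, "large")

large_size = 20 * 1024 * 1024        # large files start at 20M ...
GROWTH = 10 * 1024                   # ... and grow by 10k each pass
counter = 0


def write_random(path, size):
    """Write `size` bytes of random data to `path`."""
    with open(path, "wb") as f:
        f.write(os.urandom(size))


def prepend(path, data):
    """Copy the file, prepend data, then move the copy back over the original."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as out, open(path, "rb") as src:
        out.write(data)
        shutil.copyfileobj(src, out)
    os.replace(tmp, path)


def prune_oldest_large(n=20):
    """Delete the n oldest large files once the file system fills up."""
    files = sorted(
        (os.path.join(LARGE_DIR, f) for f in os.listdir(LARGE_DIR)),
        key=os.path.getmtime,
    )
    for path in files[:n]:
        os.unlink(path)


os.makedirs(SMALL_DIR, exist_ok=True)
os.makedirs(LARGE_DIR, exist_ok=True)

while True:
    try:
        # 1-3 small files between 1k and 7k, each rewritten via a prepend.
        for _ in range(random.randint(1, 3)):
            counter += 1
            small = os.path.join(SMALL_DIR, "small-%d" % counter)
            write_random(small, random.randint(1024, 7 * 1024))
            prepend(small, b"prepended header\n")

        # One large file, 10k bigger than the previous one.
        large = os.path.join(LARGE_DIR, "large-%d" % counter)
        write_random(large, large_size)
        large_size += GROWTH
    except OSError as e:
        if e.errno == errno.ENOSPC:  # file system full: free up room, continue
            prune_oldest_large(20)
        else:
            raise

It just keeps looping until the disk fills, prunes the oldest 20 large files, and carries on, which is roughly the churn pattern I described above.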