Hi Joel & Gavin. Thanks for your replies.
> The 'free' count on df -i is an approximation. I think what
> actually happened to you was fragmentation. The inode allocator couldn't
> find enough contiguous space to allocate more inodes.
> What version of ocfs2 were you using?
> Joel

It's ocfs2 created on a kernel around 2.6.18 or so, now running on 2.6.29.1. But I've recently discovered it has almost no features enabled; I intend to remake it soon. It could possibly be quite fragmented, but it has almost 50% free space (sorry, I should have included that). Does ocfs2 fragment badly with many changes?

> ocfs2 1.4 has a maximum of 32000 files in any single directory - we got
> bitten by this bug recently. If you're talking about 5 million files,
> then is there a possibility you've encountered this limit?

I wrote something to find any big directories and left it running this afternoon. It found one with 20000 files, but that's about it.

I also discovered that an FTP client regularly opening a directory of 5000 or so files, and then adding one, was really slowing things down. I removed those files, and interacting with the filesystem directly seems a little more zippy now.

In the case where things went wrong for me, there were literally no files in the directory yet; I had copied only 20 or so before it started erroring. If only the previous user to experience this problem had returned with more info.

Andy

_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users
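
P.S. For anyone wanting to repeat the big-directory scan described above, a minimal sketch of that kind of script (the function name, the default limit, and the exact counting approach are my assumptions here, not necessarily what was actually run):

```shell
# Hypothetical sketch of a "find big dirs" scan: count the regular
# files directly inside every directory under $1 and report any
# directory holding more than $2 of them.
find_big_dirs() {
    top="${1:-.}"          # where to start scanning
    limit="${2:-10000}"    # report dirs with more files than this
    find "$top" -xdev -type d | while IFS= read -r dir; do
        # count regular files directly inside this directory only
        count=$(find "$dir" -maxdepth 1 -type f | wc -l)
        if [ "$count" -gt "$limit" ]; then
            printf '%s\t%s\n' "$count" "$dir"
        fi
    done
}
```

`-xdev` keeps the scan on one filesystem, and `-maxdepth 1` counts only direct entries, so each directory is reported against its own file count rather than its subtree's.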