>> [ ... ] We have a process that created ~100-200 million small
>> files (under 4KB) on this volume.
That's unwise for any filesystem, but very popular with application
developers who are too important to read 'man gdbm' :-).

>> From what I read, 200 million files should not be an issue for
>> JFS, correct?

Definitely not an issue, even if unwise.

>> A lot of these files ended up in the same directory

That's exceptionally unwise.

>> and running 'ls > file' on this directory took ~7hrs.

That's what you should expect from 'stat'ing 200 million inodes (as
'ls' on most GNU/Linux distributions is aliased to do). Try
'/bin/ls > /tmp/file', or even better '/bin/ls -f > /tmp/file'.

>> We did have a couple of unplanned reboots on this server so I
>> expected some metadata corruption. fsck completed with no
>> errors

Just to be sure, in dubious cases I run 'jfs_fsck -f', and twice.

>> [ ... log issues ... ]
>> Is this a "fatal" error?

Well, what do you mean by "fatal"? The log contains "only" queued
metadata updates, and the queue ideally gets drained over time. If
the log were completely erased you would just lose a chunk of those
updates. The bigger problem is when the log itself gets corrupted,
which is why some filesystems checksum their log records.

>> I read that the maximum log size for JFS is 128MB and since
>> we used default options (0.04% of 1.5TB is well over 128MB)
>> our log should be the maximum already. This implies that I
>> cannot increase the log-size.

That default sizing rule is probably a bad idea anyway, because the
appropriate log size is proportional to the rate of metadata updates
rather than to filesystem size.

>> So please shed some light on what the log-wrap condition
>> means.

After the fact, it probably means some partial log writes. One of
the few weaknesses of JFS is that it does not yet have barrier
support, even if writes to the log have a very short delay.

>> I realize that putting this many files into one directory is
>> not ideal but is it just a matter of performance or am I
>> losing/corrupting data?

By itself it should be safe.
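The listing advice above can be sketched as follows. This is only a
toy illustration: the three-entry demo directory stands in for the
real 200-million-entry one, and the device path at the end is
hypothetical.

```shell
# On many distributions plain 'ls' is an alias (e.g. adding --color),
# which can stat() every entry; '/bin/ls' bypasses the alias, and
# '-f' disables sorting so entries stream out in raw readdir() order,
# which is what you want at this scale.
dir=/tmp/jfs-demo.$$                  # placeholder for the huge directory
mkdir -p "$dir"
touch "$dir/a" "$dir/b" "$dir/c"

/bin/ls -f "$dir" > /tmp/listing.$$   # unsorted; '-f' also includes '.' and '..'
grep -c . /tmp/listing.$$             # prints 5: a, b, c, '.', '..'

rm -rf "$dir" /tmp/listing.$$

# After unplanned reboots, a forced check (run twice, against the
# unmounted device) is cheap insurance; the device path is made up:
#   jfs_fsck -f /dev/sdX1
#   jfs_fsck -f /dev/sdX1
```

Note that '-f' also implies '-a', so the '.' and '..' entries appear
in the output; filter them out if they get in the way.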
JFS metadata, and JFS directories in particular, are based on a very
robust B-tree implementation, and as a user I am quite confident in
it. But bad storage problems (including drive or host-adapter
firmware bugs) can corrupt metadata, including logs. That's one
reason why people do backups :-).
_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
