Date: Sat, 28 Sep 2013 14:24:32 +1000
From: matthew green <m...@eterna.com.au>
Message-ID: <11701.1380342...@splode.eterna.com.au>
  | -o async is very dangerous.  there's not even the vaguest
  | guarantee that even fsck can help you after a crash in
  | that case...

All true, still it is remarkably useful - I use it all the time.

(Incidentally, while the man page says that -o log and -o async can't be used together, if they are, the result is a panic, rather than a more graceful error message ... I should point out that I saw this on a remount, adding -o async to a filesys that had been mounted -o log - with -o log in fstab ... I haven't been inclined to panic the system more by running more tests, just removed the -o log, which wasn't really needed for that filesys - it is mostly either -o async or -o ro.)

My strategy is to newfs a filesystem, mount it -o async, and extract files into it. Extracting a pkgsrc.tgz can be done in a few seconds if the system has sufficient ram for a large buffer cache - the subsequent sync / umount takes ages - but the combined time is still much less than any other strategy for filling a filesys with lots of files, and the filesys can happily be used in parallel with the sync. (-o async helps less if the files are relatively big, of course, but for pkgsrc it is ideal.)

Should the system crash for any reason while all this is happening, I simply start again, from the newfs - that is so rare (NetBSD being mostly stable, and with a UPS to guard against power problems) that the extra delay & work that might be required is irrelevant - and in any case, I suspect that I could newfs / mount -o async / tar x / crash / newfs / mount / tar x faster than a simple tar x on a "normally" mounted filesystem, even with -o log.

I also mount the filesystem I use for pkg_comp sandboxes with -o async. Again, should the system crash, I don't care - simply newfs and make the sandbox(es) again. This vastly improves compile times (particularly cleanup times - a newfs followed by repopulating the sandbox is quite fast ...
even a rm -fr on the sandbox, and repopulating, is MUCH faster than "make clean" on any sizeable package with many dependencies - I do that between package builds to guarantee no accidental undesired pollution.)

Of course, to do this, one must believe in filesystems as useful objects, and not simply a nuisance created out of the necessity of drives that were too small, which should be avoided wherever possible. Some of my systems have approaching 40 mounted filesystems. Filesystems are first class objects - they're the unit for mount options (like -o ro, -o async, and -o log), they're the unit for exports, they're the unit for dumps. Using them intelligently makes system management much more flexible.

We are still lacking some facilities that would make things even better, including filesystems that could easily grow/shrink as needed, so that the argument about running out of space in one filesystem while there is plenty available in another could be ignored - it is the only argument against multiple filesystems with any real merit, and it is true only because we allow it to remain so; it doesn't have to be (Digital Unix's ADVFS proved that decades ago).

There's more that could be done to improve things - including handling fsck better at startup. The system should be able to come up multi-user before all filesystems are checked and mounted; only some subset are really needed (of the system with almost 40, I think it needs about 8 to function for 99% of its uses - the rest are specialised), and the rest should be checked and, when ready, mounted after the system is running. -o log helps there, but isn't really enough (for many of the filesystems I have, if they were never to become available, because of hardware failure or something, it should not prevent successful multi-user boot.)

kre

ps: I had been meaning to rant like this for some time, your message just provided the incentive today!
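[As a concrete illustration of the "filesystems are the unit for mount options" point above, an fstab in this spirit might look something like the following. The device names, mount points, and comments here are purely hypothetical - a sketch, not anyone's actual configuration. The bulk-extract trick is then just newfs, mount -o async, tar x, and a final umount, which forces the sync.]

```
# /etc/fstab - each filesystem gets the options that suit its use
# device     mount point     type  options    dump fsck
/dev/wd0a    /               ffs   rw,log     1    1
/dev/wd0e    /usr/pkgsrc     ffs   rw,async   0    2   # bulk-extracted; just newfs again after a crash
/dev/wd0f    /sandbox        ffs   rw,async   0    2   # pkg_comp sandboxes - rebuilt freely
/dev/wd0g    /archive        ffs   ro         0    2   # read-only data, nothing to lose
```

[Note the fsck pass number in the last column: anything other than the root filesystem can be given a higher pass, which is about as close as the stock boot sequence gets to the "check the non-essential ones later" behaviour argued for above.]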