The "reserved space" improves filesystem performance;
by having a larger pool of free space, the filesystem
has a better chance of allocating more blocks for
one file "near" each other. If the reservation space
is set very near zero, the filesystem could become
almost full & performance could suffer. All of this
is less relevant for AFS. There are plain practical
reasons not to run a fileserver partition "almost full"
(things like backup volumes make this a chancy process
at best,) and AFS probably doesn't use the berkeley fast
file system in a really optimal fashion (the fast file
system attempts to create all the inodes for one directory
"near" in the same cylinder group, if possible, but the
AFS inode hooks don't provide space for the hint needed
to make this work.) Being "very nearly" out of space
on filesystem partitions can also make it difficult
to move volumes around and so forth - the savings in
administration time by running with a reasonable amount of
extra space may well more than make up for any extra
hardware expense.
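
To make the reserve concrete, here is a minimal sketch (my own
illustration, not anything from the fileserver or filesystem code) of the
arithmetic: given the size of a partition and the fraction held back, how
much is actually usable by ordinary writes. The 10% default and the block
counts are made-up figures, not recommendations.

    # A minimal sketch (mine, not from this post) of the reserve arithmetic:
    # how many blocks remain usable once the filesystem holds back its reserve.
    # The 10% default and the totals below are illustrative only.

    def usable_blocks(total_blocks, reserve_fraction=0.10):
        """Blocks left for ordinary writes after the reserve is held back."""
        return int(total_blocks * (1.0 - reserve_fraction))

    if __name__ == "__main__":
        total = 2_000_000                        # hypothetical 1 KB blocks
        print(usable_blocks(total))              # with the reserve in place
        print(usable_blocks(total, 0.02))        # reserve set very near zero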
At the UofM, what we have had the most trouble deciding on
is the amount of volume "overbooking" that
is reasonable. It's easy to measure how much space
a volume is currently using, and arrange volumes on partitions
such that there is plenty of free space. It's also
fairly easy to add up the quotas on a partition, and since
most volumes aren't usually full, it seems reasonable to assume
that not every volume is going to suddenly fill up tomorrow.
That makes it attractive to "overbook" -- the real problem
is that many volumes may never fill up, but
some may fill up very soon - so it's difficult to predict just how
much overbooking is best.
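
For what it's worth, the bookkeeping side of the question is just
arithmetic once you have the numbers. The sketch below is my own
illustration, not UofM policy; the volume sizes, the partition size, and
the 70% growth guess are all made up.

    # Hedged sketch of an overbooking estimate.  The quotas, usage figures,
    # partition size, and the 70% growth guess are all hypothetical.

    volumes = [
        # (quota_kb, used_kb)
        (500_000, 120_000),
        (300_000, 280_000),
        (800_000,  90_000),
    ]
    partition_kb = 1_000_000

    total_quota = sum(q for q, _ in volumes)
    total_used  = sum(u for _, u in volumes)

    print("overbooking ratio: %.2f" % (total_quota / float(partition_kb)))
    print("current usage:     %.0f%%" % (100.0 * total_used / partition_kb))

    # Project usage if every volume grows to, say, 70% of its quota
    # (but never shrinks below what it uses today).
    growth = 0.70
    projected = sum(max(u, q * growth) for q, u in volumes)
    print("projected usage:   %.0f%%" % (100.0 * projected / partition_kb))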
As far as the % of inodes goes, on the other hand, that is both easy to calculate
and has interesting, important, and direct consequences. The # of
inodes is the # of files you can create on a partition. If the number is too
small, it is possible to run out of inodes before running out of disk space.
That's easy to do if the average file size is fairly small, and it
turns out that for most applications the average file size is indeed fairly
small. In fact, one of the other important optimizations of the Berkeley fast
file system is fragments, and that exists especially for small files
(interestingly, the current PC DOS filesystem really sucks in this respect).
If you have a good sample of the files you plan to store, it's pretty
easy to determine the average size, and hence the optimal ratio of blocks per
inode. Since inodes aren't all that big (and not all software reacts
gracefully to "out of inode" errors), it's usually best to overestimate the
number of inodes and run out of file space first.
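
As an illustration of that last point, here's a small sketch (mine, with an
assumed sample path and an assumed safety factor) that walks a sample tree,
computes the average file size, and biases the bytes-per-inode figure
downward so that inodes are overestimated and file space runs out first.
The result is the sort of number you'd feed to newfs's bytes-per-inode
parameter, but check your own system's manual before trusting it.

    # Sketch (mine, not from the post): estimate average file size from a
    # sample tree and suggest a bytes-per-inode figure, biased so that
    # inodes are overestimated.  The path and the 0.5 factor are assumptions.

    import os

    def average_file_size(root):
        total, count = 0, 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path):
                    total += os.path.getsize(path)
                    count += 1
        return total / count if count else 0

    if __name__ == "__main__":
        avg = average_file_size("/afs/sample/tree")   # hypothetical sample
        # Halve the average so roughly twice as many inodes are allocated
        # as the average file size alone would suggest.
        bytes_per_inode = max(1, int(avg * 0.5))
        print("average file size:         %.0f bytes" % avg)
        print("suggested bytes-per-inode: %d" % bytes_per_inode)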
-Marcus Watts
UM ITD RS Umich Systems Group