Peter Staubach wrote:
> Hans Reiser wrote:
>> Research for filesystems generally says that as you get more than 85%
>> full the performance goes down, by a lot as you get close to 100%. 5%
>> is probably too little rather than too much.
> Wow. What is all that space used for? Other journalling file systems that
> I have seen have limited things like journals to a much smaller space,
I thought it wasn't just for the journal, but rather for the journal,
metadata, and to make things saner when you fill up your drive.
Because of lazy allocation and the way things are packed, for a given
write on an almost-full disk, the FS doesn't know, at the time your
write(2) call returns, whether it succeeded or ran out of space.
So you want somewhat more free space in reserve than you have RAM.
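To make the lazy-allocation point concrete, here is a minimal sketch (the `write_durably` helper is hypothetical, not from any filesystem's API): on a filesystem with delayed allocation, a successful write() only means the data reached the page cache, so ENOSPC can surface later, no later than fsync().

```python
import errno
import os
import tempfile

def write_durably(path, data):
    """Write data and force it to disk, catching late ENOSPC.

    With delayed allocation, a successful os.write() only means the
    data reached the page cache; block allocation can still fail
    afterwards, so the error may only show up at fsync() time.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        try:
            os.fsync(fd)  # allocation happens no later than here
        except OSError as e:
            if e.errno == errno.ENOSPC:
                return False  # disk filled up after write() "succeeded"
            raise
        return True
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile() as tmp:
    print(write_durably(tmp.name, b"x" * 4096))
```

On a disk with free space this prints True; the interesting case is the False path, which only an almost-full disk will exercise.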
But, like the decision to keep the bitmaps locked in RAM, it seems like
overkill, and at the very least it should be tunable. Terabytes aren't
just for the enterprise anymore.