On Wed, Jun 09, 2004 at 03:59:00PM -0400, Bill Moran wrote:
> Stijn Hoop <[EMAIL PROTECTED]> wrote:
> > On Wed, Jun 09, 2004 at 02:21:40PM -0500, Scott wrote:
> > > As a newbie to FreeBSD, I may be way off base, but it seems
> > > very logical to me that the size of your drive or partition
> > > would make a difference as to the percentage full at which
> > > one would start to notice problems.
> > > 
> > > In terms of megs/gigs 80% of 120 gigs still has a lot of 
> > > work space left. 80% of 4 gigs is not much. I would think 
> > > with a larger drive/partition, one could run at a higher 
> > > percentage before trouble started.
> > > 
> > > It makes sense to me anyway :)
> > 
> > That's what one would like, but UFS doesn't work that way.  Its
> > allocation algorithm assumes 10% of the disk is free -- regardless
> > of actual size. Or so I've been told (multiple times).
> > 
> > IMHO this is a bit ridiculous -- I mean, given 1 TB of space (nearly
> > feasible for a home server right now), why would an FS allocator need
> > 10% of that if the files on the volume are averaging 10 MB?
> > 
> > But then again, and this is worth noting -- I'm certainly nowhere near as
> > clueful as others on how to design a stable & fast file system.  Seeing as
> > UFS1 is still in use, and has been for the last 20 years (think about
> > it!), I think maybe the tradeoff might make sense to an expert...
> > 
> > BTW, note that you really need to consider the performance drop for yourself
> > -- like others said, if the files on the volume change infrequently,
> > performance matters little, and space more so.
> I think you've missed the point.

I most certainly do that a lot of the time :)

> The designers of UFS/FFS did not design the filesystem to require 10% free
> space in order to perform well.

OK, I did not know that.

> They developed the best, fastest (thus the name "fast file system") filesystem
> algorithms they could come up with.

That I knew, and still experience every day :)

> Then, during testing, they found that these algorithms started to perform
> really poorly when the filesystem got really full.  Thinking this might be
> important, they tested further until they knew exactly what point the
> performance started to drop off at.  They then went one step further and
> developed another algorithm in an attempt to maintain as much performance
> as possible even when the filesystem got very full.  This is why you'll
> occasionally see the "switching from time to space" message when your
> filesystem starts to fill up. The filesystem drivers are doing their best
> to degrade gracefully.

I understand.
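For what it's worth, the "time to space" switch described above can be
sketched roughly like this. This is only a toy model of the behaviour as
described in this thread, not the actual FFS code; the threshold value
and function names here are made up for illustration:

```python
# Toy sketch of the "switching from time to space" behaviour described
# above. Not the real FFS allocator; MINFREE and the policy names are
# illustrative only.

MINFREE = 0.10  # reserve fraction, analogous to the filesystem's minfree

def allocation_policy(blocks_free, blocks_total):
    """Pick a strategy the way the post describes: optimize for speed
    while space is plentiful, then switch to packing blocks tightly
    once the reserve starts being eaten into."""
    if blocks_free / blocks_total > MINFREE:
        return "time"   # fast allocation, tolerate some fragmentation
    return "space"      # slower, but wastes as little space as possible

print(allocation_policy(400_000, 1_000_000))  # plenty free -> "time"
print(allocation_policy(50_000, 1_000_000))   # past the reserve -> "space"
```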

> Now, I'm not going to say that there is no more that can be done.  I think the
> fact is that the two algorithms work well enough that nobody has bothered to
> invest the research into improving them.  (That combined with the fact that
> disk space keeps getting cheaper and cheaper, makes it unlikely that anyone
> will invest much $$$ into researching how to use that last 10% while still
> maintaining top performance).

Well, although disk is cheap, in absolute terms it's still a lot of space
that's "wasted". I do understand the issues, and your posts, this and the
previous reply, have made things clearer -- thanks.
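To put a number on it, using the round figures from earlier in the
thread: a flat 10% reserve scales linearly with disk size, so what is a
modest amount on a small disk becomes substantial on a big one.

```python
# The reserve implied by a flat 10% rule, for the disk sizes mentioned
# in this thread (4 GB, 120 GB, ~1 TB).
RESERVE = 0.10

for size_gb in (4, 120, 1000):
    print(f"{size_gb:>5} GB disk -> {size_gb * RESERVE:.1f} GB reserved")
```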


"I'm not under the alkafluence of inkahol that some thinkle peep I am.  It's
just the drunker I sit here the longer I get."
