On Wednesday 09 June 2004 12:59 pm, Bill Moran wrote:
> Stijn Hoop <[EMAIL PROTECTED]> wrote:
> > On Wed, Jun 09, 2004 at 02:21:40PM -0500, Scott wrote:
> > > As a newbie to FreeBSD, I may be way off base, but it seems
> > > very logical to me that the size of your drive or partition
> > > would make a difference in the percentage at which one would
> > > start to notice problems.
> > >
> > > In terms of megs/gigs 80% of 120 gigs still has a lot of
> > > work space left. 80% of 4 gigs is not much. I would think
> > > with a larger drive/partition, one could run at a higher
> > > percentage before trouble started.
> > >
> > > It makes sense to me anyway :)
> > That's what one would like, but UFS doesn't work that way. Its
> > allocation algorithm assumes 10% of the disk is free -- regardless
> > of actual size. Or so I've been told (multiple times).
> > IMHO this is a bit ridiculous -- I mean, given 1 TB of space
> > (nearly feasible for a home server right now), why would an FS
> > allocator need 10% of that if the files on the volume are averaging
> > 10 MB?
> > But then again, and this is worth noting -- I'm certainly nowhere
> > near as clueful as others on how to design a stable & fast file
> > system. Seeing as UFS1 is still in use, and has been for the last
> > 20 years (think about it!), I think maybe the tradeoff might make
> > sense to an expert...
> > BTW, note that you really need to consider the performance drop for
> > yourself -- like others said, if the files on the volume change
> > infrequently, performance matters little, and space more so.
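To put the 1 TB example above in concrete numbers (the figures below are illustrative, not from the thread), a 10% reserve scales linearly with volume size:

```shell
# 10% reserve on a 1 TB (1000 GB) volume vs. a 4 GB (4096 MB) volume,
# using integer shell arithmetic
echo "$((1000 * 10 / 100)) GB reserved on a 1 TB volume"   # 100 GB
echo "$((4096 * 10 / 100)) MB reserved on a 4 GB volume"   # 409 MB
```

So the same percentage leaves two hundred times more raw space idle on the big disk, which is the heart of the complaint.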
> I think you've missed the point.
> The designers of UFS/FFS did not design the filesystem to require 10%
> free space in order to perform well.
> They developed the best, fastest (thus the name "fast file system")
> filesystem algorithms they could come up with.
> Then, during testing, they found that these algorithms started to
> perform really poorly when the filesystem got really full. Thinking
> this might be important, they tested further until they knew exactly
> what point the performance started to drop off at. They then went
> one step further and developed another algorithm in an attempt to
> maintain as much performance as possible even when the filesystem got
> very full. This is why you'll occasionally see the "switching from
> time to space" message when your filesystem starts to fill up. The
> filesystem drivers are doing their best to degrade gracefully.
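For reference, both behaviors are tunable per filesystem with tunefs(8). A hedged sketch -- the device name is a placeholder, and the minfree change wants the filesystem unmounted (or mounted read-only):

```shell
# Print current tuning parameters, including the minimum free-space
# percentage (minfree) and the optimization preference
# (/dev/ad0s1e is a placeholder -- substitute your own partition)
tunefs -p /dev/ad0s1e

# Lower the free-space reserve to 5%; going below the default can cost
# performance once the disk fills up
tunefs -m 5 /dev/ad0s1e

# Pin the allocator's preference to space instead of time
# (normally left to the filesystem's own switching, described above)
tunefs -o space /dev/ad0s1e
```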
> Now, I'm not going to say that there is no more that can be done. I
> think the fact is that the two algorithms work well enough that
> nobody has bothered to invest the research into improving them.
> (That, combined with the fact that disk space keeps getting cheaper
> and cheaper, makes it unlikely that anyone will invest much $$$ into
> researching how to use that last 10% while still maintaining top
> performance.)
I really agree with what you said here. With what they paid me an hour
before I retired, I could buy a large HD. Now 2 hours would buy a
REALLY large HD. People seem to have a tendency to bleed the last few
drops of performance or space, and I think they don't understand
basic economics. I think this is similar to expecting to do a
portupgrade -fa on a P-200 in a reasonable amount of time. I saw a
t-shirt one time about "soaring with eagles when you work with
turkeys". I laughed at the time. Now I think that soaring with eagles
has a price, and you just can't do it when your system is on the low end.
My basic system has 3x30GB HDs. Why 30GB? Well, they were the smallest
ATA-133 HDs that I could buy locally. Why 3 HDs? Processes such as
buildworld work faster when your load is spread across 3 HDs.
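One way to get that spread is to put the source and object trees on separate drives. A hypothetical /etc/fstab fragment -- the device names and mount points are assumptions for illustration, not from the original post:

```
# /etc/fstab fragment: root, sources, and build objects on separate drives
# so buildworld's reads and writes hit different spindles
/dev/ad0s1a   /          ufs   rw   1 1
/dev/ad1s1d   /usr/src   ufs   rw   2 2
/dev/ad2s1d   /usr/obj   ufs   rw   2 2
```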