In response to Jerry McAllister <[EMAIL PROTECTED]>:

> On Wed, Sep 20, 2006 at 02:01:23AM +0930, Wayne Sierke wrote:
> > I was very interested to read the following written by Matthew Seaman in
> > 2004.
> >
> > 


> >         One thing you can do for any file system over about 256Mb is
> >         drop the free space reserve ('-m' option in newfs(8), or it
> >         can be modified in an existing filesystem using tunefs(8)).
> >         1% is more than adequate if you're creating a multi-gigabyte
> >         filesystem.
> > 
> > I'm especially interested in the comment about the 'free space reserve'
> > which flies in the face of everything I can recall ever reading that has
> > always mirrored the warnings in tuning(7) and tunefs(8) about the perils
> > of reducing the reserved space below the default. However I didn't see
> > any reply to Matthew's email to repudiate his statements.
> > 
> > What are people's experiences in the field? Are the cautions now much
> > less relevant with modern hard-drive capacities and performance?
> The free space reserve is most important on file systems that root
> will need to write to in order to keep the system itself going.
> In places that you put data that is fairly controlled, you can get
> away with having a very small free space reserve, though some is
> probably a good idea.   Those tend to be the huge file systems where
> having 8% or 10% reserved makes a big difference in the amount of
> disk being tied up.   

From "man tunefs":
"o  Settings of 5% and less force space optimization to always be
    used which will greatly increase the overhead for file writes.
 o  The file system's ability to avoid fragmentation will be
    reduced when the total free space, including the reserve,
    drops below 15%.  As free space approaches zero, throughput
    can degrade by up to a factor of three over the performance
    obtained at a 10% threshold."

I believe those were the warnings that the OP spoke of.

My understanding, from the research I've done, is that it's the
_percentage_ of free space that's important, not the _amount_.
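
That said, the _amount_ tied up by a fixed percentage grows with the
disk, which is why the reserve hurts on big filesystems.  A quick
back-of-the-envelope illustration (the sizes are made up, not from any
real system):

```shell
# Same 8% reserve, very different absolute cost as capacity grows.
for size_mb in 256 40960 409600; do
    reserve_mb=$((size_mb * 8 / 100))
    echo "${size_mb} MB filesystem -> ${reserve_mb} MB reserved at 8%"
done
```

On a 400 GB filesystem that 8% is roughly 32 GB sitting idle, which is
the "big difference" Jerry mentions above.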

Furthermore, if you manually force the optimization scheme to "time",
these performance issues don't come into play unless you actually
fill the filesystem until free space drops below 15%.
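
For reference, a minimal sketch of doing that with the flags documented
in tunefs(8) and newfs(8) (the device name here is hypothetical; tunefs
should be run on an unmounted filesystem):

```shell
# Force "time" optimization and drop the reserve to 1% on an
# existing filesystem:
tunefs -o time -m 1 /dev/ad0s1f

# The equivalent choices at filesystem-creation time:
newfs -o time -m 1 /dev/ad0s1f
```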

OTOH, I'm not aware of any recent research into whether or not this
still applies.  IIRC, those numbers were obtained using disks with
sizes in the megabytes.  It's possible that the percentages aren't
linear, and that disks in the gigabytes don't exhibit the same
behavior.
Sounds like a fun research project.  I'll have to make some time to
test it out.

Bill Moran
Collaborative Fusion Inc.