Rick Moen wrote:
> Quoting Bill Broadley (b...@cse.ucdavis.edu):
> 
>> You mean the easy steps where you take a default install with 3 ish
>> partitions, then predict the future needs for 7-8 partitions and then tweak
>> the journal or lack or, various mount flags, and partitioning ordering to
>> minimize seeks?
> 
> Even a half-assed attempt at reducing average seek time/distance is 
> radically better than none at all.  Lowest-hanging fruit in this 
> case might be something like this:
> 
> o  Root partition taking up the first 40% of the drive.
> o  Swap taking up some reasonable size, in the middle of the drive.
> o  Var partition taking up the rest.
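For concreteness, that layout could be fed to sfdisk as something like the
following. This is only a sketch: the 9 GB drive size, the MiB figures, and
/dev/sda are my assumptions, not anything from Rick's post.

```
label: dos
# root: first ~40% of a ~9 GB drive (the outer, fastest tracks)
/dev/sda1 : size=3600MiB, type=83
# swap: some reasonable size, sitting in the middle of the drive
/dev/sda2 : size=1024MiB, type=82
# /var: everything that remains
/dev/sda3 : type=83
```

You'd pipe that in with "sfdisk /dev/sda < layout.txt" and then wire up the
mount points and swap in /etc/fstab as usual.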


Three partitions is IMO quite reasonable; eight much less so.  Although I find
splitting root and /var kind of strange: wouldn't that increase your seeks
quite a bit?

> Anyhow, I spent about twenty minutes originally planning the partition
> map of the linuxmafia.com server box that was ultimately destroyed
> by a power spike during a severe wind storm this past April -- and then
> about eight years running it.  You'd suggest I'd have been smarter to be
> penny wise and pound foolish by not bothering?  Really?

As mentioned, that looks quite reasonable.

> And, I forgot to mention earlier:  It's hardly just performance that
> benefits from seek optimisation.  Hard drive _life_ will (statistically
> speaking) be greatly extended by the greatly reduced wear

Heh, "statistically greatly extended" sounds like more of a WAG[1] to me.  Sure,
fewer seeks = less wear, not to mention better performance.  However, saying
that your drive will die when seek distance = X seems wildly overstated.  I
suspect there are much more significant factors, like temperature, vibration,
power-on hours, etc.  Do you know of any papers correlating seek count and
distance with disk life?  I read a statistical analysis that Google published
on some ungodly number of drives that had quite a few surprises in it.

>  It's only
> one data point, but the two 9GB SCSI2 drives killed by power spike, in
> April, had been in continuous service for 11 years.

Sounds good.  I've never tried to run 9 GB drives that long; I think I managed
7-8 years before I didn't care to pay the power/cooling/time to deal with such
small disks.  I have several file servers with 16 drives each; during their
service lives I think I lost 1 or 2.

> You know, I _can_ recall having made occasional miscalculations about
> filesystem size.   Fortunately, I hadn't forgotten how the "ln -s" 
> command works.  ;-> 

2-3 partitions sounds very reasonable.  Having to use ln is a pretty big
sacrifice IMO: it complicates backups and totally destroys any seek locality.
At least with a single partition the filesystem tries its best to keep files
in the same directory a short seek distance away.  Karsten had the same
problem.  The normal average case may well justify 2-3 partitions, but I think
it's rather exceptional to justify 7-8.
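The "ln -s" remedy being discussed is essentially the following.  This is just
a sketch: throwaway temp directories stand in for the real mount points, and
the apt cache path is only used as the running example from this thread.

```shell
# Pretend /var filled up and we want to move apt's .deb cache to a roomier
# filesystem, leaving a symlink behind so the old path keeps working.
small_fs=$(mktemp -d)   # stand-in for the too-small /var
big_fs=$(mktemp -d)     # stand-in for the roomy filesystem

mkdir -p "$small_fs/cache/apt/archives"
echo dummy > "$small_fs/cache/apt/archives/foo.deb"

# Relocate the directory, then symlink the old path to its new home.
mv "$small_fs/cache/apt" "$big_fs/apt-cache"
ln -s "$big_fs/apt-cache" "$small_fs/cache/apt"

# The old path still resolves, but the data now lives on the other filesystem.
cat "$small_fs/cache/apt/archives/foo.deb"   # prints "dummy"
```

Which works, but is exactly the kind of thing that trips up backups and later
resizing arithmetic, as above.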

> (In one case, I actually did make the root filesystem too small, but
> that became apparent within about ten minutes of installing the needed
> software, so the obvious remedy was to blow it away and do it right.)

Sure, some experimentation is needed.  I've had similar things happen,
especially with things that tend to accumulate over time, like the cache of
.debs that apt likes to keep around.  In many cases quite a bit of time can be
saved by not worrying about /var being a bit bigger than expected, tracking
which partition the symbolic links point to/from, etc.  Especially when you go
to adjust the partition sizes, only to realize that you had forgotten you made
the links, so your new numbers are off as well.

_______________________________________________
vox-tech mailing list
vox-tech@lists.lugod.org
http://lists.lugod.org/mailman/listinfo/vox-tech