John Brahy wrote:
...
and over time I have learned to appreciate these, but lately I have been
creating more partitions.
/usr/src
/usr/obj
are two of the ones suggested when rebuilding my system, and I
definitely like the speed of doing a newfs on /usr/obj.

Certainly handy.
On the other hand...I pretty much build from scripts now, so I'm not sitting around waiting for the rm -r /usr/obj/* step anyway...just start and walk away for anywhere from an hour to a week. :)
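For anyone who hasn't tried it, the trick is that newfs'ing the partition is far faster than rm'ing a huge directory tree. It goes something like this (the device name here is just an example -- use whatever disklabel partition actually holds your /usr/obj, and this assumes it has an /etc/fstab entry):

   # umount /usr/obj
   # newfs /dev/rsd0h        (newfs wants the raw device)
   # mount /usr/obj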

I also have been putting mysql on its own partition, and then I got a little
crazier and added more partitions and my list has grown to this:

/
/home
/tmp
/var
/var/mysql
/usr
/usr/local
/usr/src
/usr/obj
/usr/Xbld
/usr/XF4
/virtualhosts

So am I going overboard? Or am I missing any good partitions?

Yes, I'd say you are going a bit overboard. On the other hand, you can make a case for most of the examples you list under some circumstances, and I don't see any Blatantly Bad ones (here are some Bad Examples: /usr/X11R6, /root, /etc), though I can't think of any benefit to src or XF4 on separate partitions (I do have an NFS src directory on my mvme88k, though, due to 4G not being nearly enough to build on anymore)...nor do I see any real-life benefit to a /usr/local partition.

I also would not guess that you would be doing much building in /usr/src on the same system that was so busy you put mysql on its own partition...so again, just because you can make a case for a separate partition on system X doesn't mean every system will see a benefit from that same partition.

when I first posted, Nick Holland replied with several reasons to have
multiple partitions:
security, fragmentation, protecting the filesystem from overfilling,
organization, and space tracking.

I think I over-convinced you. :)
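
Since it came up again: the "security" part is mostly about mount options. With separate partitions you can mount pieces of the tree with restrictive options, something like these /etc/fstab lines (device names are just examples for illustration):

   /dev/sd0d /tmp  ffs rw,nodev,nosuid 1 2
   /dev/sd0e /home ffs rw,nodev,nosuid 1 2

You can't do that when everything lives in one big / partition.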

does increasing the number of partitions increase access to the files on
those partitions?

Not sure I know what you mean by this...

It COULD increase access time if you have partitions that are commonly used together at opposite ends of the disk -- for example, perhaps src and obj, or src and /usr (where the compiler and libraries are) -- though if speed really matters that much to you, get more disks.
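
If you do go the extra-disk route, it's just a matter of pointing the mount points at different spindles, something like this in /etc/fstab (again, device and partition names are just examples):

   /dev/sd0g /usr/src ffs rw,nodev 1 2
   /dev/sd1a /usr/obj ffs rw,nodev,nosuid 1 2

That way the reads from src and the writes to obj aren't fighting over one set of heads.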

Any feedback would be appreciated.

As with most things in life, ask why, don't just do it by formula. There are still some cases where the "/ and swap" solution fits for testing, even though I now use it rarely (though I've wished I had a couple times!). A long time ago, I had a nice little webserver set up, then my friend Henning said, "Here, try this chroot'ed Apache patch"...which absolutely hosed my grand plans: my /var partition was too small, since all the web documents were served from /home/<user> directories. You may note the warnings about this in the FAQ are perhaps a little over-emphasized...if you read the FAQ carefully, you can sometimes guess when something has bitten me personally. :)


There are other reasons I've since found for partitioning, however...data partitions have become my favorite lately. MULTIPLE data partitions, in fact. And yes, multiple data partitions for one application. Here's why: if your application can be forced to split its data across multiple partitions, the storage can be easily expanded later. SO...you can start out with a 200G drive today, add a 700G drive in a year, and not have to migrate everything from one to the other (btw: it takes a long time for even a fast machine to migrate 200G of data). It also means that if something goes Horribly Wrong on one partition or drive, you can (probably) get away with restoring only that one partition from backup.
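As a sketch of what I mean (the /dataN mount points and devices are made up for the example):

   /dev/sd0d /data0 ffs rw,nodev,nosuid 1 2
   /dev/sd0e /data1 ffs rw,nodev,nosuid 1 2

and when the 700G drive shows up a year later, you just add:

   /dev/sd1a /data2 ffs rw,nodev,nosuid 1 2

and point the application at /data2 for new data -- no migration needed.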

Just had that happen to me this week -- an E-mail archive system with well over 1T of data blew out one of its drives in a most spectacular way (a short across the power supply pins), taking out a power supply and a RAID box in the process. So...the survivors of this drive set had to be migrated to a spare RAID box, and then I made an error -- I missed the fact that the new box was set for RAID0 rather than RAID5, so after I beat on it a bit, it finally gave in and did what I apparently told it to: initialized the remaining data to zeros. So, off to the backups we went.

FORTUNATELY, this system had several drive modules, and the one that failed (fortunately again!) was the least full of the bunch, so I only had 40 or so days of restores to do. I'm rather glad the other nine months of data in the thing escaped injury! Even if the same event had happened on one of the other storage modules, it wouldn't have been as catastrophic as if I had it all in one pile.

Related reason: when partitions fill with data and you move on to the next one, you can remount the "filled" partitions RO. That way if, for example, a disk tosses a dead short across the power supply and you have 2T of storage suddenly lose power, you don't have to fsck the entire 2T, just the parts that were mounted RW.
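Flipping a filled partition to read-only is a one-liner (a matching "ro" in its /etc/fstab entry makes it stick across reboots; /data0 here is the hypothetical mount point from above):

   # mount -u -o ro /data0

After that, a crash or power loss means fsck only has to look at the partitions that were still mounted read-write.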

The amazing thing is, I believed in doing this before this event happened. Now I've got a real-life example to point to, rather than just a hypothetical. :)

Nick.
