>       I'd love to be able to solve the disk space management and availability
> issue once and for all (at least for a few years or so) by making available a
> terabyte of disk space from either one or a handful of AFS file servers rather
> than 8-16 GB. While such a centralized model may lack the advantages
> of having data distributed across many file servers, it would have the possible
> advantage of being easier to manage.

We have around 422GB online now, scattered across roughly 40 fileservers.
That works out to an average of about 10GB per server, but that number is
misleading, because many servers still have smaller, older disks, or
belong to specific research projects (and thus can only be used to hold
that group's volumes).

Our typical server setup is a Sparc 1+ with 24-28MB of memory, two SCSI
busses, and a maximum of 4 disks per bus.  The largest single disk we normally
put on AFS is approximately 4GB, so we can generally get anywhere from
28-32GB on a single server.  Even with Sparc 1+'s, that gives us a
reasonable level of performance.
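
(As an aside, if you want to see how full each /vicep partition on a
given fileserver is, "vos partinfo" will report free and total space per
partition; something like the following, where the hostname is just a
placeholder for one of your servers:

    vos partinfo fs1.example.com

That's a quick way to keep an eye on how the space is actually spread
across servers.)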

One thing I'd consider, in your position, is what happens if one of your
servers has to go down.  With as many servers as we have, we generally
have a few scheduled to go down each month for one reason or another -
new disks, disk replacement (with 400+GB online, occasionally a drive
goes bad).  Additionally, if you put all your disk space on one server,
you essentially lose the ability to replicate databases and critical
volumes for fault tolerance as well as load balancing.  (We currently
run 3 database servers, which is reasonable for our size; Andrew,
which has nearly an order of magnitude more users than we do, runs
with 5.  You should probably be doing the same.)
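
(For what it's worth, replicating a critical volume onto a second
fileserver is just a matter of adding a read-only site and releasing
the volume - something along these lines, with placeholder server and
partition names:

    vos addsite fs2.example.com /vicepa root.cell
    vos release root.cell
    vos examine root.cell

"vos examine" will then list the read-write site and each read-only
site, which is a quick way to confirm the replication took.  None of
that is possible in any useful way if all your vice partitions live on
a single machine.)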

With reasonably fast servers, I'd recommend not putting more than,
say, 50-100GB per server, and then only if your network can keep up
(it sounds like you don't have that problem).

-- Jeffrey T. Hutzelman (N3NHS) <[EMAIL PROTECTED]>
   Systems Programmer, CMU SCS Research Facility
   Please send requests and problem reports to [EMAIL PROTECTED]

