My AFS cell, like many others, is constantly growing. At present we have
13 AFS file servers (RISC/Ultrix and Sun/Solaris) serving out ~80 GB of disk
space, ~160 AFS client machines (mostly SPARCstation 5 workstations), and a
whopping 21,000 accounts. Unsurprisingly, our user population's appetite for
disk space keeps growing as well.
Our usual model for an AFS file server has been to bring up a
Unix server (which these days happens to be a Sun SPARCstation 20 running
Solaris 2.3) with multiple SCSI channel cards, a SAS (single attach station)
FDDI network connection (we've got our own FDDI ring that plugs into the
campus backbone ring via a Cisco router), and anywhere from 8 to 16 GB of
disk space on which the AFS filesystems reside. Once the file server goes
into production, it is incorporated into our AFS backup scheme, which
consists of 3 Exabyte tape stackers (soon to be 4 or 5) driven by an expect
script picked up from CMU several years ago.
I'd love to solve the disk space management and availability
issue once and for all (at least for a few years) by making a terabyte of
disk space available from one or a handful of AFS file servers, rather than
8-16 GB apiece. While such a centralized model would forgo the advantages of
having data distributed across many file servers, it would have the likely
advantage of being easier to manage.
Does anyone know of technological developments or solutions that
might apply here? I'm more than curious.
TIA,
Steven McElwee
--
-----------------------------------------------------------------------------
Steven McElwee | Email --> | [EMAIL PROTECTED]
OIT/System Adm | <-- US Snail Mail |
Duke University |------------------------|---------------------------
401 North Building | (919) 660-6914 (Work) | (919) 660-7029 (Fax)
Durham, NC 27706 | | (919) 971-0781 (Cellular)
-----------------------------------------------------------------------------