On Fri, 26 Jan 1996, Steven McElwee wrote:
> My AFS cell (as with many others) is constantly expanding. At
> present we've got 13 AFS file servers (RISC/Ultrix and Sun/Solaris)
> serving out ~80 GB of disk space, ~160 AFS client machines (mostly
> SPARC 5 workstations), and a whopping 21,000 accounts. Unsurprisingly,
> our user population's appetite for disk space keeps growing.
>
> Our usual model for an AFS file server has been to bring up a
> Unix server (which these days happens to be a Sun SPARC 20 running
> Solaris 2.3) with multiple SCSI channel cards, a SAS (single attach
> station) FDDI network connection (we've got our own FDDI ring that
> plugs into the campus backbone ring via a Cisco router), and anywhere
> from 8 to 16 GB of disk space on which the AFS filesystems reside.
> Once the file server goes into production, it is incorporated into
> our AFS backup scheme, which consists of 3 Exabyte tape stackers
> (soon to be 4 or 5) driven by an expect script picked up from CMU
> several years ago.
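
For a sense of what such a backup script automates, here is a minimal
sketch of a nightly full-dump loop built on the standard vos commands.
This is a rough stand-in, not the CMU expect script itself; the server
name and staging directory are placeholders, and incremental dumps and
the actual tape handling are omitted.

    #!/usr/bin/env python
    # Rough stand-in for the kind of nightly full-dump loop such a
    # backup script drives; server name and staging directory are
    # placeholders, tape-stacker control is omitted.
    import subprocess

    SERVER = "fs1.example.edu"      # placeholder file server name
    DUMP_DIR = "/backup/dumps"      # placeholder staging area for tape

    def list_volumes(server):
        """List volume names on a server with 'vos listvol -quiet'."""
        out = subprocess.run(["vos", "listvol", server, "-quiet"],
                             capture_output=True, text=True,
                             check=True).stdout
        # With -quiet, each data line begins with the volume name.
        return [line.split()[0] for line in out.splitlines()
                if line.strip()]

    def dump_volume(volume):
        """Write a full dump of one volume ('-time 0') to a file."""
        subprocess.run(["vos", "dump", "-id", volume, "-time", "0",
                        "-file", "%s/%s.dump" % (DUMP_DIR, volume)],
                       check=True)

    if __name__ == "__main__":
        for vol in list_volumes(SERVER):
            dump_volume(vol)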
>
> I'd love to solve the disk space management and availability
> problem once and for all (at least for a few years or so) by making a
> terabyte of disk space available from one or a handful of AFS file
> servers rather than 8-16 GB apiece. While such a centralized model
> would give up the advantages of having data distributed across many
> file servers, it would have the possible advantage of being easier to
> manage.
>
> Does anyone know of any possible technological developments or
> solutions that might apply here? I'm more than curious.
>
We are running a cell with more than 1.5 TB of data.
However, keeping all of this data online on disk seems a little
too expensive. Therefore we are running Multiple-Resident-AFS from
PSC (the Pittsburgh Supercomputing Center), which allows for data
migration onto tape (robot systems controlled by Cray's DMF, UniTree,
etc.). This software would also allow you to have big partitions and
big volumes (our biggest volume is 78 GB right now). But except in
the Cray environment the fileserver does synchronous I/O, so you
would run into a performance bottleneck if you put too much data on
a single fileserver.
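
To see roughly what that bottleneck means, a little back-of-envelope
arithmetic helps. The 50% effective-throughput factor below is an
assumption for illustration (an allowance for synchronous I/O and
protocol overhead), not a measurement from our cell.

    # Back-of-envelope: time to move data through one FDDI-attached
    # fileserver at an assumed 50% of the 100 Mbit/s line rate.
    FDDI_MBIT = 100.0                      # FDDI line rate, Mbit/s
    EFFECTIVE = 0.5                        # assumed usable fraction
    mb_per_s = FDDI_MBIT / 8 * EFFECTIVE   # ~6.25 MB/s effective

    def hours_to_move(gigabytes):
        return gigabytes * 1024 / mb_per_s / 3600

    print("78 GB volume: %5.1f hours" % hours_to_move(78))    # ~3.6
    print("1 TB server:  %5.1f hours" % hours_to_move(1024))  # ~46.6

So a terabyte concentrated on a single server takes on the order of
two days to move end to end, which is why we spread data over several
fileservers rather than one.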
Hartmut
-----------------------------------------------------------------
Hartmut Reuter                 e-mail [EMAIL PROTECTED]
                               phone  +49-89-3299-1328
RZG (Rechenzentrum Garching)   fax    +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-----------------------------------------------------------------