Steve,

I'm not sure if this helps, but we have 11 AFS servers that today have 
about 700GB of disk, and we think we can grow to 1TB.  All servers are 
SPARC 10/51s running Solaris 2.3 with AFS 3.3a, and we think each server 
can have about 100GB or more attached.  We use an SBus expansion unit 
to give us an additional 6 SBus slots, though it takes up one slot on 
the CPU.  This gives us 8 SBus slots for SCSI controllers (six on the 
expansion chassis and two in the CPU); the remaining CPU slots hold the 
FDDI card and the expansion chassis card.

Now on each SCSI controller we have four 4GB drives or two 8GB drives, 
i.e. 16GB per controller, so we have room for 128GB of disk per server 
(rough math sketched below).  We have not filled a server yet, but one 
is sitting around 100GB today.  We hope to expand this by moving to 
UltraSPARCs later this year, but we will see.
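
In Python terms, the capacity math above is just the following (a quick 
sketch; the 8 controllers and 16GB per controller are the configuration 
I described, not hardware limits):

    # Rough per-server capacity math, using the configuration described above.
    scsi_controllers = 8         # 6 on the expansion chassis + 2 in the CPU
    gb_per_controller = 4 * 4    # four 4GB drives (or two 8GB drives) each

    print(scsi_controllers * gb_per_controller, "GB per server")  # -> 128 GB per server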

Anyway, the biggest problem with this much data is backing it up as you 
indicate!  We took the same scripts and modified them to run with DLT 
stackers.  Today, we have 5 DLT stackers, and our fulls run in just over 
24 hours.  The backup machine is a SPARCcenter 1000, and is on the FDDI 
ring with the servers.  We are in the process of changing these scripts 
so we can move to staggered backups, which should reduce this time 
overall.  However, the one server that has 100GB is taking 24 hours by 
itself.  We also plan to change our scripts so they can divide each 
backup into a specified amount of space...this will let us split the 
backups on a single server.
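
To give an idea of what I mean by splitting on size, here is a minimal 
sketch of the grouping logic (in Python rather than our actual expect 
scripts, and the volume names and sizes below are made up):

    # Greedily group (name, size_gb) pairs into backup jobs whose totals
    # stay under cap_gb, so one big server can be spread over several runs.
    def split_by_size(volumes, cap_gb):
        jobs, current, current_size = [], [], 0
        for name, size in sorted(volumes, key=lambda v: v[1], reverse=True):
            if current and current_size + size > cap_gb:
                jobs.append(current)
                current, current_size = [], 0
            current.append(name)
            current_size += size
        if current:
            jobs.append(current)
        return jobs

    # Hypothetical volumes on one file server
    volumes = [("user.alice", 6), ("user.bob", 4),
               ("proj.big", 12), ("src.tools", 3)]
    for i, job in enumerate(split_by_size(volumes, cap_gb=15), 1):
        print("backup job", i, ":", job)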

Anyway, I hope this helps a bit.  If you have other questions, just let 
me know...Mic

Steven McElwee wrote:
> 
>         My afs cell (as with many others) is constantly expanding and
> increasing in size. At the present time we've got 13 AFS file servers
> (risc/ultrix and sun/solaris) serving out ~80 GB disk space, ~160 AFS client
> machines (mostly sparc5 workstations) and a whopping 21,000 accounts.
> Unsurprisingly, the appetite for disk space of our user population is
> constantly growing.
> 
>         Our usual model for an afs file server has been to bring up a
> unix server (which these days happens to be a sun sparc20 running solaris 2.3)
> with multiple scsi channel cards, a SAS (single attach station) fddi network
> connection (we've got our own fddi ring that plugs into the campus backbone
> ring via a Cisco Router), and anywhere from 8 to 16 GByte of disk space on
> which the AFS filesystems reside. Once the file server goes into production,
> it is incorporated into our AFS backup scheme that consists of 3 Exabyte
> tape stackers (soon to be 4 or 5) with an expect script picked up from CMU
> several years ago.
> 
>         I'd love to be able to solve the disk space management and
> availability issue once and for all (at least for a few years or so) by
> making a terabyte of disk space available from either one or a handful of
> AFS file servers rather than 8-16 GB per server. While such a centralized
> model would give up the advantage of having data distributed across many
> file servers, it would have the possible advantage of being easier to manage.
> 
>         Does anyone know of any possible technological developments or
> solutions that might apply here? I'm more than curious.
> 
> TIA,
> Steven McElwee
>  --
>  -----------------------------------------------------------------------------
>  Steven McElwee          |         Email -->      |  [EMAIL PROTECTED]
>  OIT/System Adm          |   <-- US Snail Mail    |
>  Duke University         |------------------------|---------------------------
>  401 North Building      |  (919) 660-6914 (Work) |  (919) 660-7029 (Fax)
>  Durham, NC 27706        |                        |  (919) 971-0781 (Cellular)
>  -----------------------------------------------------------------------------
