Wayne,

We have a quite similar environment: an AFS cell with about 50 GB
of data spread over 6 AFS fileservers, all attached to an FDDI
backbone. Our central file and backup server is also connected to
the FDDI ring. This server runs OpenVision's UniTree under EP/IX, a
Unix system for which no AFS client exists. The UniTree file space
is NFS-mounted on our AFS fileservers, and we do the AFS backup
exactly the way you described: a perl script executed by cron.
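
As a rough illustration (not our production script; server name,
partition and mount point below are placeholders), the nightly job
boils down to something like:

    #!/usr/bin/perl
    # Sketch of the nightly cron job: dump every volume on one AFS
    # partition into the NFS-mounted UniTree file space.
    $server = "afs-fs1";                 # hypothetical AFS fileserver
    $part   = "/vicepa";                 # partition to back up
    $dest   = "/unitree/afsbackup";      # NFS mount of the UniTree space

    # -fast makes vos listvol print only the volume IDs.
    open(VOLS, "vos listvol $server $part -fast |")
        || die "vos listvol failed: $!\n";
    while ($vol = <VOLS>) {
        chomp($vol);
        next unless $vol =~ /^\s*(\d+)\s*$/;
        $vol = $1;
        # -time 0 requests a full dump; the dump file lands on UniTree.
        system("vos dump -id $vol -time 0 -file $dest/$vol.dump") == 0
            || warn "dump of volume $vol failed\n";
    }
    close(VOLS);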

Right now we are looking for new strategies. We will probably
migrate to IBM's ADSM, the archive/backup software running on our
new central fileserver, although ADSM's AFS support needs some
improvement before we can switch. The reason for the change is
simple: the transfer rate of UniTree over NFS is too slow for
backing up large amounts of data. This is not caused by insufficient
resources on the fileserver; the machine, the UniTree disk cache
devices and the mass storage behind them (a Metrum RSS-600 library)
are fast enough not to be the bottleneck. It is purely an
NFS/UniTree issue, and I suspect the same holds for NSL UniTree
(please let me know if your experience differs). Accessing the
UniTree file space via ftp is normally much faster than via NFS, so
that route would be worth examining. To avoid having to provide
temporary file space for vos dump, you should be able to pipe vos
dump directly into ftp; we recently discovered that this works with
our UniTree version, although it is not documented.
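
For illustration, a minimal sketch of such a pipe (host name,
account and volume are placeholders; it relies on the BSD ftp
client's convention that a local file name beginning with '|' is run
as a command whose output becomes the file data):

    #!/usr/bin/perl
    # Sketch: pipe a vos dump straight into UniTree via ftp, with no
    # temporary dump file on local disk.
    $vol  = $ARGV[0] || die "usage: $0 volume\n";
    $host = "unitree.example.edu";       # hypothetical UniTree FTP host

    open(FTP, "| ftp -n $host") || die "cannot start ftp: $!\n";
    print FTP "user backup SECRET\n";    # placeholder account/password
    print FTP "binary\n";
    # ftp reads the data from the command's standard output because
    # the local file name starts with '|'.
    print FTP "put \"|vos dump -id $vol -time 0\" $vol.dump\n";
    print FTP "quit\n";
    close(FTP) || die "ftp transfer failed\n";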

Some months ago we also considered PSC's MR AFS, but decided
against it for reasons beyond this topic. MR AFS would at least
have solved the backup problem.

If you are interested, I can make available the perl scripts for
backup and restore based on the NFS solution, along with some
documentation.


Werner

-- 
============================================================================
Werner Baur
Leibniz-Rechenzentrum     X.400:  S=Baur;OU=lrz;P=lrz-muenchen;A=d400;C=de
Barer Str. 21            RFC822:  [EMAIL PROTECTED]
D-80333 Muenchen           Tel.:  ++49-89-2105-8781
Germany                     Fax:  ++49-89-2809460
============================================================================



From: Wayne Schroeder <[EMAIL PROTECTED]>
Date: Wed, 22 Mar 95 10:54:26 PST
To: [EMAIL PROTECTED]
Subject: alternate backups


SDSC is considering backing up our AFS data into our NSL UniTree
archival storage system*.  Since UniTree manages a large number of
tapes already, this may be preferable to having another set of tapes
for the operators to manage using the Transarc backup system.

This may be doable via a script that runs 'vos dump', sometimes as
a partial (incremental) dump, and then stores each volume file into
UniTree, creating UniTree directories based on the date. This could
probably be a cron job that runs automatically at night.
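
Roughly, I have in mind something like this (volume name, paths and
dates are all hypothetical):

    #!/usr/bin/perl
    # Sketch: incremental vos dump into a per-date UniTree directory.
    # Run nightly from cron, e.g.:  0 2 * * * /usr/local/bin/afsdump.pl
    ($d, $m, $y) = (localtime(time))[3, 4, 5];
    $dir = sprintf("/unitree/afsbackup/%04d%02d%02d",
                   $y + 1900, $m + 1, $d);
    mkdir($dir, 0755) unless -d $dir;

    # -time 0 would give a full dump; a date dumps only what changed
    # since then.
    $since = "03/15/1995";               # hypothetical last full dump
    system("vos dump -id user.wayne -time $since"
           . " -file $dir/user.wayne.dump");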

Does anyone out there run a similar AFS backup system?  Anyone have a
script like this that they could share?

Eventually we may run PSC's MultiResident AFS, which migrates
individual files to archival storage systems like UniTree; those
copies would serve as backups too. But we aren't currently running
it.

We've been experimenting with AFS for a while now but had not used
it much until recently. We are now in the process of making it a
bigger part of our infrastructure.

Thanks much,

Wayne Schroeder
San Diego Supercomputer Center

*Supercomputer centers often run archival storage systems as support
systems for the Crays, Paragons, etc.  They are designed to manage a
moderately large number of preferably large-ish files that are
accessed infrequently.  Users store the results of their production
runs into the archive for later access.  Our archival storage system
is currently NSL UniTree running on an RS6000 with 150 GB of RAID
disk, and two STK tape silos (with 'robot' arms to move tapes in and
out of the tape drives).  It currently contains 8 TB of data in 1.45
million files on 38,000 3480 tapes and 4,000 3490 tapes.  Each silo
contains 6,000 tapes.  The other tapes are stored in racks and are
retrieved manually by operators.
