> I think, if the disk space has the potential of filling, deleting the
> .backup volumes is more important than quickly re-creating them. We have
> this problem, due mainly to a large number of big volumes (>500MB) as
> well as limited DASD. (We can't afford not to back these volumes up.)
> We now do our backups by server (I'd like to change that), and our
> script takes care of getting rid of the .backup volumes when the dump
> is done...
You might consider breaking up your large (>100MB) volumes into many
smaller ones. For example, I help maintain a locker of programs that
grows to a couple hundred megabytes of stored programs for about 5
platforms. We broke up the volume by platform, so each platform has
its own volume (and its own replicas). That way we not only broke the
space down into more manageable pieces, but we can also move the
volumes around more easily and tailor replication sites based upon
the usage patterns around campus.
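Just to make that concrete, here's a rough sketch (in Python, driving
the standard vos/fs commands) of what creating per-platform volumes
with their own replication sites might look like. All of the cell,
server, partition, and volume names below are made up; substitute
your own.

  #!/usr/bin/env python3
  """Sketch: split a program locker into per-platform AFS volumes,
  each with its own read-only replica sites."""
  import subprocess

  # Hypothetical layout: read/write volume location plus one extra
  # read-only replica site per platform.
  PLATFORMS = {
      # platform:  (rw server,         partition,  extra ro server)
      "sun4":      ("fs1.example.edu", "/vicepa", "fs2.example.edu"),
      "decmips":   ("fs1.example.edu", "/vicepb", "fs3.example.edu"),
      "rs_aix":    ("fs2.example.edu", "/vicepa", "fs1.example.edu"),
  }

  def run(*cmd):
      """Run one AFS command, echoing it first."""
      print(" ".join(cmd))
      subprocess.run(cmd, check=True)

  for platform, (rw_server, partition, ro_server) in PLATFORMS.items():
      volume = "progs." + platform                    # e.g. progs.sun4
      mount  = "/afs/example.edu/progs/" + platform   # per-platform mount point

      # Create the read/write volume and mount it under the locker.
      run("vos", "create", rw_server, partition, volume)
      run("fs", "mkmount", mount, volume)

      # Define replica sites (one on the RW server, one remote), then
      # release to push the read-only copies out.
      run("vos", "addsite", rw_server, partition, volume)
      run("vos", "addsite", ro_server, partition, volume)
      run("vos", "release", volume)

Once the pieces are separate volumes, moving one platform's tree to
another server is a single "vos move", and each set of replicas can
follow its own users.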
If we were forced to keep the locker as a single, monolithic volume,
we'd have many more problems with backups, replication, etc. than we
do with small volumes. Also, the AFS Guide from Transarc recommends
keeping your volumes under 100MB or so, if you can. In general, few
situations require the use of large volumes, since most datasets can
be logically subdivided into under-100MB chunks. There are exceptions,
however.
If you don't mind my asking, why do you need such a large volume?
-derek