Paul Blackburn <[EMAIL PROTECTED]> writes:

> How would you recover the contents of a failed hard disk
> on an AFS fileserver if you only have the directory/file
> level backup in a tar archive? Where is the mapping between
> filenames and disk/partition?

Yes, these are problems we'd have to overcome.  But I don't think they
would be too hard.  Likely less painful than backups that take all day
long (or longer).  We're not yet at the all-day-long stage but we'll get
there eventually.
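
One rough way to keep the filename-to-partition mapping would be to record
it at backup time, right next to the tar archive.  Below is a minimal sketch
of the idea; the paths, the manifest format, and the use of plain "df -P" to
find the backing partition are all assumptions for illustration, not
something we actually run:

    #!/usr/bin/env python
    # Sketch only: write a manifest mapping each top-level directory we
    # archive to the device/partition that backs it, so a restore after a
    # disk failure knows what lived where.  Paths and the use of df -P are
    # assumptions; the real mapping would come from whatever the site uses.
    import os
    import subprocess

    ARCHIVE_ROOT = "/:"                   # DFS filespace root (illustrative)
    MANIFEST = "/backup/manifest.txt"     # kept next to the tar archive

    def device_for(path):
        # Last line of `df -P` output: device blocks used avail capacity mount
        out = subprocess.check_output(["df", "-P", path]).decode()
        return out.strip().splitlines()[-1].split()[0]

    with open(MANIFEST, "w") as manifest:
        for entry in sorted(os.listdir(ARCHIVE_ROOT)):
            path = os.path.join(ARCHIVE_ROOT, entry)
            manifest.write("%s\t%s\n" % (path, device_for(path)))

    # The archive itself is still just a filesystem-level tar:
    subprocess.check_call(["tar", "cf", "/backup/fs.tar", ARCHIVE_ROOT])

With a manifest like that, recovering a failed disk would come down to
restoring whichever directories the manifest says lived on that device,
rather than guessing.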

> I would also question if the tar type of backup will scale.

I've done a tar of /: and it's *much* faster than using the DFS backup
system.  (It's been a while so I don't have numbers handy.)
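
Collecting numbers again would be easy enough; something along these lines
(just a sketch, the source path and archive location are made up) would give
throughput figures to compare against the DFS backup runs:

    # Sketch: time a tar pass and report rough throughput, so the
    # tar-vs-DFS-backup comparison has numbers behind it.  Paths are
    # illustrative only.
    import os
    import subprocess
    import time

    SOURCE = "/:"                 # what we back up
    ARCHIVE = "/backup/fs.tar"    # where it goes

    start = time.time()
    subprocess.check_call(["tar", "cf", ARCHIVE, SOURCE])
    elapsed = time.time() - start

    size_mb = os.path.getsize(ARCHIVE) / (1024.0 * 1024.0)
    print("%.0f MB in %.0f s = %.1f MB/s" % (size_mb, elapsed, size_mb / elapsed))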

> If you are looking for ways to speed up backup consider options like:
> 
> a) have a backup device per fileserver (use butc)
>    this could be a stacking tape device auto cycling tapes

Got it.

> b) use high speed backup devices (eg DLT beats writeable CD)

Got it.

> c) use high speed connections (eg ultra SCSI, SSA, ATM)

Got it.

> d) make your AFS database servers pure
>    (eg only run AFS db processing, nothing else)

Got it.

> e) make your AFS database servers fast (CPU clock speed, local disk)

Getting it :)  And performance *is* better, but it *still* takes much longer
than something like tar.

> f) subset your volumes to backup only data that needs to be backed up

Yeah, everybody tells us we should rearrange things, but we're using DFS to
deliver email into people's home directories, so we really do want a lot of
little filesets.  Maybe the message is that DFS isn't a good choice for
serving lots of little filesets, but our only real problem with it is the
slow backups.

Maybe somebody will have a great insight and see a way to make the
per-fileset overhead much smaller :)
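
Just to put rough numbers on why that overhead hurts: with lots of little
filesets, the fixed cost per fileset swamps the time spent actually moving
data.  Every figure below is a made-up round number, not a measurement:

    # Back-of-envelope only; all of these numbers are assumptions.
    filesets = 5000               # lots of little home-directory filesets
    avg_size_mb = 20              # each one is small
    per_fileset_overhead_s = 15   # dump setup/teardown per fileset
    throughput_mb_s = 5           # tape/device throughput

    data_time = filesets * avg_size_mb / throughput_mb_s   # 20,000 s
    overhead_time = filesets * per_fileset_overhead_s      # 75,000 s
    print("data: %.1f h, overhead: %.1f h"
          % (data_time / 3600.0, overhead_time / 3600.0))

With numbers anywhere near those, the per-fileset setup cost matters far
more than a faster tape drive, which would explain why a single big tar
stream comes out so far ahead.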
