Turbo Fredriksson <[EMAIL PROTECTED]> writes:
> I've been running OpenLDAP/OpenSSL/MIT Kerberos as the main user database
> for almost a year now, and earlier today I started to get worried
> all of a sudden.. No special reason, maybe just my paranoid brain cells
> giving up a shout I guess :)
>
> What files do I need to back up, and how do I restore them in case of
> hardware/fs failures?
>
> HOWTO? FAQ?
This all depends on exactly what your server configuration is, how frequently things change, and how much money you have. It also depends on how painful you want the recovery to be.

The old-fashioned approach would be to make sure each machine has a local tape drive, shut down each server at some periodic interval, and run ufs dump on every filesystem on that server. Lock the tapes in a vault, or perhaps cycle some off-site. Also, generate paper describing your machine hardware configuration and disk partition sizes, and make sure you have bootable media with disk partition tools, restore, and anything special needed to fix up the boot block logic; keep copies with your on-site & off-site backups. Try it all out once on a scratch machine just to be sure you can make an exact clone of one of your production machines.

The old-fashioned approach will ensure very high quality backups, but at a price: you have to take everything down to single-user mode (or risk glitches), and you have to be *very* sure your backups are treated as state secrets. It is, nevertheless, a model to study. For instance, if you care about disaster recovery, recording machine configuration remains important no matter what else you do.

Some popular short-cuts include doing backups on a running system, and using tar. If the system isn't quiescent, you run the risk of capturing an inconsistent view of the filesystem. Probably most of the information in your backup will be good. This will sound very attractive until the day you have to piece together the one big database file that isn't consistent. I don't know that either the MIT kdc code or openldap comes with tools to evaluate the health of, or repair any damage to, their databases.

Tar is a great tool for backing up user files and application data, and for transporting them between systems. It's not so great for system backups. Device inodes, "unix domain" sockets, fifos, and so forth can be a problem for tar. Incremental backups of a filesystem, especially dealing with files that are deleted or renamed, are not feasible with tar.

In any event, for your particular named applications, it is certainly possible to do better, but it will require you to study your setup carefully. It may also be more painful if you haven't got things set up right.

For instance, take a machine that is just an MIT KDC. You can probably divide every file on the machine up into one of a few cases:

	unchanging system stuff, identical between all KDCs
	local configuration information and minor secrets
	master key - the big secret
	Kerberos database
	log files that change, but that you might not miss

For the unchanging system stuff, one ufs dump on a CD-ROM would be keen. You might be able to get this down to just "install a fresh copy of the OS with package X+Y+Z"... if you're very brave.

The local configuration stuff *could* be as simple as notes on paper, or as complicated as an incremental ufs filesystem dump. It might also include per-machine keytabs and private keys, unless those can be easily regenerated when the machine is reloaded from scratch. Once you figure out and test whatever strategy you use, you're basically done here, unless you are worried about tracking any updates you make later to the OS software or configuration (which could be an issue).

The master key & kerberos db are the two really special cases. The kerberos db proper changes, so you'll want to make regular copies of that. You may be able to use a variation of the regular stock MIT kerberos replication stuff to get a copy of the database.
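To make that concrete, here is a minimal sketch using the stock kdb5_util tool (the install prefix and the dump file path are assumptions on my part, not necessarily what your build uses):

	# Dump the Kerberos database to a flat file:
	/usr/local/krb5/sbin/kdb5_util dump /var/krb5kdc/kdb-backup.dump

	# and, on a rebuilt KDC (with the master key stash back in place),
	# reload it:
	/usr/local/krb5/sbin/kdb5_util load /var/krb5kdc/kdb-backup.dump

This is essentially the same flat-file format that kprop ships to slave KDCs, which is why the replication machinery can double as a way to get a copy of the database.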
It's then a regular flat file, with encrypted keys. Since the actual keys are encrypted, if you're feeling brave, you could ship your backup over the network and back it up remotely. Anyone sniffing the wire could acquire principal names and other stuff, but they can probably guess that anyway just by watching traffic to the kdc.

The master key does not change. Especially if you are shipping your database over the wire, you'll want to treat the master key as a state secret. You'll want to be *very* sure you don't lose your master key, because if you do, all your kdc backups will be useless.

If you are doing a ufs dump of things, it would be a big advantage to try to segregate the above classes of things -- if your OS lives on one partition, and *just* your kdb & master key live on their own partition, then a regular ufs dump won't traverse the mount point and back up the kdb/master key. If you worked hard at it, you might be able to mount most of your OS read-only, or even boot off CD-ROM.

The other things you mention are probably much like this.

OpenLDAP has its own replication logic, which can do either incremental changes or the whole thing. You'll probably always want to capture the entire database. How secret your backup of this needs to be depends on your threat model. It probably doesn't need to be a state secret, but you might want to treat it as such anyway.

For OpenSSL, well, presumably you have certificates and public/private key pairs. The private keys should be treated as state secrets. The public keys & certificates could be posted on the web; for backups of those you only care about tamper resistance, not privacy.

For all of these things, I doubt anyone else will be able to predict exactly what filenames you'll have used for everything. For instance, on a test kdc, I have these k5 related files:

	unchanging system files:
		/usr/k5/bin/*
		/usr/k5/sbin/*
		/usr/k5/include/*
		/usr/k5/lib/*
		/usr/k5/man/*
		/usr/k5/share/*
	per-machine configuration:
		/usr/k5/etc/*
		/usr/k5/db/kadm5.acl
		/usr/k5/db/kadm5.keytab
	k5 database:
		/usr/k5/db/principal*
	master key:
		/usr/k5/db/.k5.UNIQ.UMICH.EDU
	log files:
		/usr/k5/log/*

I think it's very likely your filenames are different. Probably "out of the box", these path names will be /usr/local/krb5/* and /var/krb5kdc. I think. If you're doing replication, you'll have a few more files to deal with. Possibly some bits are even more scattered; you may have a krb5.conf or keytab in /etc, for instance.

Some people do backups based on scripts fired off from crontab (there's a rough sketch of one such script below). This usually works best when doing incremental backups to disk or the network, but it can also work with tape robots, or simply a very long tape and a somewhat attentive operator.

Again, for all of these things, you should *really* try them out in advance. You won't know that you failed to back up the one crucial thing without which everything else is useless until after the disk catastrophe. You probably also won't want to be reading man pages to figure out how to use restore, 3 hours after your bedtime.

One minor note: some cheap tape drives don't do any form of read verification. When the heads get dirty, they can produce useless backups with no warning whatsoever. Avoid them, if possible; otherwise, work out a way to test tape data integrity. Never trust any salesperson who claims read-after-write verification is unnecessary. High quality tape drives do not have this problem.
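Tying a few of those pieces together, here is the sort of crontab-fired backup script alluded to above. It's only a sketch: the paths, the backup destination, and the use of slapcat for the whole-database LDAP export are my assumptions rather than anything about your setup, and it inherits the earlier caveat about dumping databases out of a live, non-quiescent system.

	#!/bin/sh
	# nightly-backup.sh -- hypothetical example; adjust paths to taste.
	DEST=/var/backups/`date +%Y%m%d`
	umask 077			# keep everything we write private
	mkdir -p "$DEST" || exit 1

	# Whole-database LDIF export of the OpenLDAP directory.
	/usr/local/sbin/slapcat -l "$DEST/ldap.ldif"

	# Flat-file dump of the Kerberos database; the keys in it stay
	# encrypted under the master key, which is deliberately not copied.
	/usr/local/krb5/sbin/kdb5_util dump "$DEST/kdb.dump"

and a crontab entry to fire it off every night at 03:30:

	30 3 * * * /usr/local/sbin/nightly-backup.sh

Restoring is then roughly slapadd -l on the LDIF for a rebuilt directory server and kdb5_util load on the dump for a rebuilt KDC, plus the master key stash, keytabs, and private keys recovered from wherever you keep the state secrets.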
Other forms of backup are not necessarily better or worse than tape -- CD-RWs aren't always readable on all drives, some brands behave differently, 1 in a hundred floppies isn't any good, etc.

				-Marcus Watts
				 UM ITCS Umich Systems Group