Hi,
I used to do similar kinds of backups on our smallish clusters,
but recently decided to do something slightly smarter, and have
been using rsnapshot for backups ever since. It uses rsync and hard
links to make snapshots of /home (or any filesystem you want)
without replicating every single byte each time; that is, it only
saves the changes to the filesystem. So after the first time you
run it, only a relatively small amount of backup traffic is
necessary to get a coherent snapshot of the whole thing. I set
up a separate cheap box with a couple of large drives and three
gig-ethernet cards; each card plugs into the switch for one of
our clusters. Now the /home directories for all three clusters are
all backed up nightly with very little network overhead and no
intervention. It has been running without a hitch, and it is
easy to add less frequent backups for /usr/local or the like that
I'd hate to lose. Definitely worth the effort in my case! And
it is trivial to export the snapshot directories (read-only of
course) back to the clusters as needed for recovery purposes.
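In case a concrete sketch helps, here is roughly what that setup
looks like. The hostnames, paths, and retention counts below are
made up for illustration, not my actual config:

# /etc/rsnapshot.conf excerpt -- fields must be separated by tabs:
#   snapshot_root   /backups/
#   interval        daily    7
#   interval        weekly   4
#   backup          root@cluster1:/home/        cluster1/
#   backup          root@cluster2:/home/        cluster2/
#   backup          root@cluster3:/usr/local/   cluster3/

# sanity-check the config before trusting it
rsnapshot configtest

# cron entries on the backup box drive the rotations, e.g.:
#   30 3 * * *   root   /usr/bin/rsnapshot daily
#   0  4 * * 0   root   /usr/bin/rsnapshot weekly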
- John
On Fri, 16 Feb 2007, Nathan Moore wrote:
Hello all,
I have a small beowulf cluster of Scientific Linux 4.4 machines with common
NIS logins and NFS-shared home directories. In the short term, I'd rather
not buy a tape drive for backups. Instead, I've got a jury-rigged backup
scheme. The node that serves the home directories via NFS runs a nightly tar
job (through cron),
[EMAIL PROTECTED]> tar cf home_backup.tar ./home
[EMAIL PROTECTED]> mv home_backup.tar /data/backups/
where /data/backups is a folder that's shared (via NFS) across the cluster.
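A crontab line on the server along these lines would do it (the 02:00
run time and the assumption that tar runs from / are illustrative
guesses, not part of the scheme as described):

# root crontab on the NFS server -- nightly tar of /home at 02:00
0 2 * * * cd / && tar cf home_backup.tar ./home && mv home_backup.tar /data/backups/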
The actual backup then occurs when the other machines in the cluster (via
cron) copy home_backup.tar to a private (root-access-only) local directory.
[EMAIL PROTECTED]> cp /mnt/server-data/backups/home_backup.tar /private_data/
where "/mnt/server-data/backups/" is where the server's "/data/backups/" is
mounted, and where /private_data/ is a folder on client's local disk.
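A matching entry on each client could then pull the tarball a couple
of hours later (again, the run time is only an illustrative guess):

# root crontab on each client -- copy the tarball after the server job finishes
0 4 * * * cp /mnt/server-data/backups/home_backup.tar /private_data/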
Here's the problem I'm seeing with this scheme. Users on my cluster have
quite a bit of stuff stored in their home directories, and home_backup.tar is
large (~4GB). When I try the cp command on a client, only 142MB of the 4.2GB
is copied over (this is repeatable - not a random error, and always about
142MB). The cp command doesn't fail; rather, it quits quietly. Why would
only some of the file be copied over? Is there a limit on the size of files
which can be transferred via NFS? There's certainly sufficient space on disk
for the backups (both the client's and the server's disks are 300GB SATA
drives, formatted to ext3).
I'm using the standard NFS that's available in SL43; the config is basically
the default.
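A few generic checks that would help narrow this down (only the paths
from the commands above are assumed; nothing else about the setup):

# compare what the server wrote against what the client sees over NFS
ls -l /data/backups/home_backup.tar              # on the server
ls -l /mnt/server-data/backups/home_backup.tar   # on the client

# checksums on both ends should match if the copy really completed
md5sum /data/backups/home_backup.tar             # on the server
md5sum /private_data/home_backup.tar             # on the client

# see which NFS version and mount options the client actually negotiated;
# an NFSv2 mount, for example, cannot handle files larger than 2GB
nfsstat -m
grep nfs /proc/mounts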
regards,
Nathan Moore
- - - - - - - - - - - - - - - - - - - - - - -
Nathan Moore
Physics, Pasteur 152
Winona State University
[EMAIL PROTECTED]
AIM:nmoorewsu
- - - - - - - - - - - - - - - - - - - - - - -
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf