I presently back up 1 TB of files via an rsync cron job:
/usr/bin/time /usr/bin/rsync -a --delete /from/dir/ /destination/dir
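
For reference, a crontab entry along these lines runs it nightly; the 2 a.m. schedule and log path here are just examples:

# min hour dom mon dow  command
0 2 * * * /usr/bin/time /usr/bin/rsync -a --delete /from/dir/ /destination/dir >> /var/log/rsync-backup.log 2>&1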
 
 
 
From: [email protected] [mailto:[email protected]] On Behalf Of Richard 'Doc' Kinne
Sent: Thursday, January 08, 2009 4:07 PM
To: [email protected]
Subject: [BBLISA] System Backup thoughts and questions...
 
Hi Folks:
 
I'm looking at backups - simple backups right now.
 
We have a strategy where an old computer has a large, external, removable hard 
drive attached. Directories - large directories - from our other production 
servers are mounted on this small computer via NFS. A cron job then does a 
simple "cp" from the NFS-mounted production partitions to the large, external, 
removable hard drive.
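
In script form it amounts to something like this (the /mnt/prod and /backup 
mount points are stand-ins, and the exact cp flags are approximate):

# production partitions, NFS-mounted on the backup box under /mnt/prod
# external removable drive mounted at /backup
cp -a /mnt/prod/scsi/web /backup/scsi/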
 
I thought it was an elegant solution, myself, except for one small, niggling 
detail.
 
It doesn't work.
 
The process doesn't copy all the files. Oh, we're not having a problem with 
file locks, no. When you do a "du -sh <directory>" comparison between the 
/scsi/web directory on the backup drive and the production /scsi/web directory, 
the differences measure in the gigabytes. For example, my production /scsi 
partition has 62 GB on it. The most recently completed backup has only 42 GB!
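
Concretely, with /backup standing in for the external drive's mount point, that 
comparison is just:

du -sh /scsi/web /backup/scsi/web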
 
What our research found is that the cp command apparently has a limit of 
250,000 inodes per copy. I have image directories on the web server that have 
114,000 files, so I think this is the limit I'm running into.
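
One way to test that theory is to count inodes on each side - a quick 
diagnostic, with /backup again standing in for the external drive:

# count files and directories (i.e. inodes) under each tree
find /scsi/web | wc -l
find /backup/scsi/web | wc -l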
 
While I'm looking at solutions like Bacula, Amanda, etc., I'm wondering if 
rsyncing the files might work. Or will I run into the same limitation?
 
Any thoughts?
---
Richard 'Doc' Kinne, [KQR]
American Association of Variable Star Observers
<rkinne @ aavso.org>
 
 
 