There are probably as many answers to this question as there are members of this list, but I've found tar to be a simple and effective solution for this sort of problem, though I can't say I've tried it on anything approaching that number of files:

tar cf - /source/directory | ( cd /backup/directory ; tar xvf - )
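One caveat: invoked exactly as above, tar (GNU tar, at least) strips the leading "/" and recreates source/directory underneath /backup/directory. If the goal is to have the files land directly in the backup directory, the -C form should do it; the paths are the same placeholders as above:

# create the archive relative to the source dir, extract into the target;
# -p (and running as root) preserves permissions and ownership on extraction
tar -cf - -C /source/directory . | tar -xpf - -C /backup/directory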

Looking forward to the discussion thread,
Dave


On Thu, 8 Jan 2009, Richard 'Doc' Kinne wrote:

Hi Folks:

I'm looking at backups - simple backups right now.

We have a strategy where an old computer has a large external, removable hard drive attached. Directories - large directories - from our other production servers are mounted on this small computer via NFS. A cron job then does a simple "cp" from the NFS-mounted production drive partitions to the large, external, removable hard drive.
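(To make that concrete, here is a rough sketch of the sort of crontab entry involved; the mount points are made up, and the real job would presumably use "cp -a" or "cp -R" so the copy recurses and keeps ownership and timestamps.)

# hypothetical /etc/crontab line: copy the NFS-mounted production
# partition onto the external drive every night at 02:00
0 2 * * * root cp -a /mnt/prod-scsi/web /mnt/external-backup/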

I thought it was an elegant solution, myself, except for one small, niggling detail.

It doesn't work.

The process doesn't copy all the files. Oh, we're not having a problem with file locks, no. When you do a "du -sh <directory>" comparison between the /scsi/web directory on the backup drive and the production /scsi/web directory, the differences run into the gigabytes. For example, my production /scsi partition has 62GB on it; the most recent backup has only 42GB!
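(The comparison amounts to something like the following; the backup-side path is hypothetical, wherever the external drive happens to be mounted.)

du -sh /scsi/web               # on the production server
du -sh /mnt/backup/scsi/web    # on the backup host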

What our research found is that the cp command apparently has a limit of copying 250,000 inodes. I have image directories on the webserver that hold 114,000 files, so I think this is the limit I'm running into.
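(A quick way to verify the counts on each side, rather than inferring from du, is to count the files directly, e.g.:

find /scsi/web -type f | wc -l

run against both the production directory and its copy on the backup drive.)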

While I'm looking at solutions like Bacula, Amanda, etc., I'm wondering whether rsyncing the files might work. Or will I run into the same limitation?
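Something along these lines is what I have in mind; the destination path is just an example, and the flags are standard rsync options:

# -a preserves permissions, times, and symlinks; --delete removes files
# from the backup that are gone from production; the trailing slash on
# the source copies its contents rather than the directory itself
rsync -a --delete /scsi/web/ /mnt/backup/scsi/web/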

Any thoughts?
---
Richard 'Doc' Kinne, [KQR]
American Association of Variable Star Observers
<rkinne @ aavso.org>




