> Until we get a better backup strategy, I'm backing up the
> workstations to the server via mounting a shared samba drive to /mnt.
>
> Trying tar cvf /mnt/samba_share/backup.tar /* eventually
> yields backing up /mnt, which produces an unwanted loop,
> including /mnt/samba_share
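(For the record, the quoted loop can be avoided with GNU tar's --exclude, or with --one-file-system, which stops at mount points. A minimal sketch on a scratch tree, since the real paths are site-specific; the directory names below are made up:)

```shell
set -e
# Scratch tree standing in for / and the mounted share:
src=$(mktemp -d)
mkdir -p "$src/etc" "$src/mnt/samba_share"
echo "dummy" > "$src/etc/fstab"

# --exclude=./mnt keeps the mounted share (and the growing tarfile
# itself) out of the archive; -p asks tar to preserve permissions.
tar cpf "$src.tar" -C "$src" --exclude=./mnt .

tar tf "$src.tar"    # lists ./etc/fstab but nothing under ./mnt
```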
Actually, there is more than one thing wrong with this. You can't tar the tarfile (the loop problem you've already discovered), and you also don't want to tar /proc or /dev, and probably some other stuff. Either *might* do something intelligent, but you wouldn't want to restore those items from the tarfile later.

I am going to suggest dump. If you dump /, it will get all the files on that filesystem. It will not dump anything that's a sub-mount, for example /mnt; or, if you have partitioned your disk, it might skip things like /usr, /var, /home, or /boot. For each of those, you would need to run another dump to back up THAT filesystem.

The reason I suggest dump instead of tar, rsync, etc., is that dump is part of the ext2/ext3 source code. Nobody knows the filesystem better than those guys, so dump is more intelligent about things like character special devices, symlinks, hardlinks, and so on. I think you can only count on tar/rsync/etc. to reliably back up plain files and directories. Even then, you probably have to give special options to preserve information that would otherwise be lost (-p for tar preserves file ownership and permissions).

I don't believe dump has an option to list the files as it's dumping them. Instead, what I always do is dump a filesystem and then run restore -tf on the result to list its contents.

_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
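(A sketch of the dump-then-list workflow described above; the dump level, flags, and target path are illustrative, and this has to run as root against an ext2/ext3 filesystem:)

```shell
# -0: full (level 0) backup   -u: record it in /etc/dumpdates
# -f: write the dump to this file instead of the default tape device
dump -0uf /mnt/samba_share/root.dump /

# dump can't list files as it goes, so list the archive afterwards:
restore -tf /mnt/samba_share/root.dump
```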
