Daniel Pittman wrote:
david <[email protected]> writes:
I've got the following:
2 x servers - single small hard drives in each
1 x desktop - four hard drives, including one removable drive in a caddy
intended solely for backup purposes.
I run Mondo on the two servers periodically with the intention of
being able to do a disaster [1] recovery quickly. Mondo produces 2 DVD
images for each server. I run rsync nightly (good enough for my
purposes) for more volatile data such as email, databases, etc.
Everything is very tidy.
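(A minimal sketch of such a nightly rsync job; the paths and the backup host are placeholders, not the actual layout described above:)

    #!/bin/sh
    # nightly-backup.sh -- mirror the volatile data to another machine.
    # -a preserves permissions, ownership, timestamps and symlinks;
    # --delete keeps the destination an exact mirror of the source.
    set -e
    rsync -a --delete /var/mail/ backuphost:/backup/mail/
    # Copy database *dumps* rather than live database files: files
    # copied out from under a running database are not crash-consistent.
    rsync -a --delete /var/backups/db-dumps/ backuphost:/backup/db/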
The desktop has about 350 GB of data and software. The software is
unbelievably complicated because I use it to test server set-ups,
odd bits of software, etc. In other words, it's a dog's breakfast.
I would like to run Mondo or something similar on this machine too,
but I fear it would not be practical. At the moment I run rsync for
the most obvious data, but that doesn't help with all the complicated
software, and I would like to be able to recover that too in the event
of disaster [1].
What's the current best practice for backup in this kind of
situation?
It varies. Personally, I take advantage of the fact that a Linux system
has no magic "metadata", so a copy of all the files is enough to perform
a bare-metal restore.
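(In practice "a copy of all the files" can be taken with rsync on the running system; a sketch, where /mnt/backup is an assumed mount point for the backup drive and the flag choices are one reasonable set, not gospel:)

    # Mirror the whole filesystem onto a mounted backup drive,
    # preserving the metadata Linux does track: ownership, permissions,
    # timestamps and symlinks (-a), hard links (-H), ACLs (-A) and
    # extended attributes (-X).
    rsync -aHAX --numeric-ids --delete \
        --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ \
        --exclude=/tmp/ --exclude=/mnt/ \
        / /mnt/backup/
    # /proc and /sys are kernel pseudo-filesystems, and on a udev-based
    # system /dev is recreated at boot, so none of them need copying.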
So this suggests to me that I could make a "cp -a" copy of my root/boot
drive onto an empty drive, which I then remove and take off-site,
rsync'ing it periodically? Or is it necessary to use dd? Where does the
MBR fit into this?
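(For what it's worth: the MBR is just the first 512 bytes of the disk, 446 bytes of boot code plus the partition table, so cp -a will not capture it. It can be saved with dd, or simply regenerated by reinstalling the boot loader. A sketch, assuming the disk is /dev/sda and GRUB is in use:)

    # Save the MBR: boot code plus partition table, 512 bytes in all.
    dd if=/dev/sda of=/root/sda-mbr.bin bs=512 count=1
    # Later, restore only the 446 bytes of boot code, leaving the
    # (possibly different) partition table on the new disk alone:
    dd if=/root/sda-mbr.bin of=/dev/sda bs=446 count=1
    # Or skip dd entirely and reinstall the boot loader after the
    # files have been copied across:
    grub-install /dev/sda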
The problem with any backup system is that normally you only find out
for sure that it works when you really *need* it to work.