Re: Backup advice

2007-05-24 Thread Jason Lixfeld


On 24-May-07, at 12:33 AM, Doug Hardie wrote:



On May 23, 2007, at 19:03, Jason Lixfeld wrote:



On 23-May-07, at 9:23 PM, Doug Hardie wrote:

The criterion for selecting a backup approach is not the backup
methodology but the restore methodology.


Excellent point.

Perhaps I'm asking the wrong question, so let me try it this way  
instead:


I'm looking for a backup solution that I can rely on in the event  
I have a catastrophic server failure.  Ideally this backup would  
look and act much like a clone of the production system.  In the  
worst case, I'd reformat the server array, copy the clone back
to the server, set up the boot blocks, and that would be it.


Ideally this clone should be verifiable, meaning I should be able  
to verify its integrity so that it's not going to let me down if
I need it.


I'm thinking external USB hard drive of at least equal size to the  
server array size as far as hardware goes, but I'm lost as far as  
software goes.


What kind of data are you backing up?  If you are backing up the  
system and your data then you have to be very careful about links.   
Some backup solutions will copy the files as separate files.  When  
you restore the link is gone.  An update to one of the linked files  
will no longer be seen by the other names.  The OS uses a lot of  
links.  If all you are backing up is data, it's probably not an
issue.  I have used both dump and tar successfully.  I currently  
use tar as I have many directories I don't want to back up.  Tar
requires some care and feeding to handle links properly.  It  
doesn't do it by default.  Dump does handle them properly by  
default.  Another option is rsync.  The advantage it has is that it  
only copies the changes in the file.  It will run a lot faster than  
dump or tar which will copy everything each time.  You do have to  
be careful with /dev if you are copying the root partition.


I'm backing up my entire system.  To me, it's easier this way in the  
long run.  In the event of a failure, you just copy everything from  
the backup back to the system without the need to worry about
reinstalling applications, library dependencies, configuration files,
nuances, idiosyncrasies, etc.  I've been doing this for years with my  
OS X laptop and it's the quickest way to get back on your feet in a  
worst case scenario.


Dump seems to be the best at doing what I'm looking to do.  Better  
than tar or rsync.  I think dd would beat out dump, but dd is far  
less of a backup tool than dump is, so I think dump is still the  
winner.  The caveat of a full dump taking the most time and resources  
can be reasonably mitigated by doing a full dump every X intervals  
and incremental in between.  It seems to be a fair compromise seeing  
as how cheap hard drive space is these days.


2 x system space would be enough for a full dump plus plenty of  
increments, I'd say.  No?  Is there a rule of thumb?  3x?  4x?


As far as restoring goes, let's assume my machine blew up one full  
backup and 15 increments ago and I want to restore the entire system  
in its entirety from my backup.  How is that done?  Point restore to
the last incremental and it figures it out for itself, or is it a  
manual process where I have to figure out what backups consist of the  
complete system?


One backup disk is not all that great a safety approach.  You will  
never know if that drive has failed till you try and use it.  Then  
it's too late.  Failures do not require that the drive hardware has
failed.  Any interruption in the copy can cause an issue that may  
not be detected during the backup.  Sectors generally don't just go  
bad sitting on the shelf, but it does happen.  That was a  
significant problem with tapes.  Generally 25% of the tapes I used  
to get back from off-site storage after a month were no longer  
readable.


There has to be some way for the OS to know if a drive is bad, or to  
verify the state of the data that was just copied from one location  
to another.  Is there no method of doing error correction?  My laptop  
backup programs I've been using for years show me information at the
end of the run:  Files copied, Speed, Time, Errors, etc.


If a UNIX backup process is as unreliable as you're making it out to  
be, then I could buy 10 drives and still potentially have each one  
fail and be screwed if I were to need to rely on it at some point.


I'd feel more comfortable backing up off a RAID1 to a single backup  
drive that provided some sort of error protection/correction/notification
than backing up off a RAID1 to 100 backup drives that
didn't give me any indication as to the success of the copy.




Re: Backup advice

2007-05-24 Thread Olivier Nicole
 2 x system space would be enough for a full dump plus plenty of  
 increments, I'd say.  No?  Is there a rule of thumb?  3x?  4x?

That depends on how much your file system changes. If every file changes
before the incremental run, dump 1 will be as big as dump 0, and 2x will
be enough for just dump 0 and dump 1.

There is no rule.

Olivier


Re: Backup advice

2007-05-24 Thread Jason Lixfeld


On 24-May-07, at 3:16 AM, Olivier Nicole wrote:


2 x system space would be enough for a full dump plus plenty of
increments, I'd say.  No?  Is there a rule of thumb?  3x?  4x?


That depends on how much your file system changes. If every file changes
before the incremental run, dump 1 will be as big as dump 0, and 2x will
be enough for just dump 0 and dump 1.

There is no rule.


How would one go about gauging the rate of file system change on a
system to determine a suitable amount of backup space?



Olivier




Re: Backup advice

2007-05-24 Thread Jason Lixfeld


On 24-May-07, at 3:43 AM, Doug Hardie wrote:

Rsync will leave you with a duplicate of the drive.  You could  
pretty much boot off it and run.  You would need to configure the  
drive and install a boot loader though.


Booting off the backup and running is more in line with what I want
to do, so I will go the rsync route instead of the dump/restore route.
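
Roughly, I imagine the setup would look something like this, assuming
the USB disk shows up as da0 and a single root partition for brevity
(the device names are illustrative):

   fdisk -BI da0                 # one FreeBSD slice, MBR boot code
   bsdlabel -w -B da0s1          # disklabel plus bootstrap
   newfs -U /dev/da0s1a          # create the filesystem
   mount /dev/da0s1a /mnt
   rsync -aHx --delete / /mnt/   # mirror root; -x keeps devfs and other mounts out

A real layout with separate /usr and /var would need a label entry and
an rsync pass per partition, and the clone's /etc/fstab would have to
be edited to point at the da0 device names.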


Thanks for your feedback, Doug.  It's been a great help.




Re: Backup advice

2007-05-24 Thread Jerry McAllister
On Wed, May 23, 2007 at 07:27:05PM -0400, Jason Lixfeld wrote:

 So I feel a need to start backing up my servers.  To that end, I've  
 decided that it's easier for me to grab an external USB drive instead  
 of a tape.  It would seem dump/restore are the tools of choice.  My  
 backup strategy is pretty much I don't want to be screwed if my RAID  
 goes away.  That said I have a few questions along those lines:

A popular sentiment.

 - Most articles I've read suggest a full backup, followed by  
 incremental backups.  Is there any real reason to adopt that format  
 for a backup strategy like mine, or is it reasonable to just do a  
 dump 0 nightly?  I think the only reason to do just one full backup  
 per 'cycle' would be to preserve system resources, as I'm sure it's  
 fairly taxing on the system during dump 0 times.

Yes, dump/restore is generally the way to go, unless you have not
set up your partitions conveniently to separate what you want to dump
from what you do not want to dump.

The main reason to do a full dump followed by a series of incrementals
is to save resources.   This includes dump time as well as media to
receive the dump[s].   If you happen to be using tape for example, a
large full dump may take several tapes for each dump, but an incremental 
may then take only one for each.

There is one more thing to consider.   The way dump works is that it
starts by making a large list of all the stuff it will dump.  Then it
starts writing to media (tape, disk file, network, whatever).  On systems
where files change frequently, especially new ones being added and old
ones being deleted, it is quite possible, even probable that there will
be changes between the time the index list is made and when the dump
of a particular file/directory is written.   dump and restore handle
this with no problem and just a little warning message, but it makes
the backup a little less meaningful.   You will often see messages
from restore saying it is skipping a file it cannot find.  That is
because the file was deleted from disk after the list was made, but
before the data was written to media.   Files created after the list
was made will not be dumped until the next time dump is run.  Files
that are modified after the list was made will only be dumped if they
were also modified before the list was made.

That said, if the amount I am backing up takes less than about
an hour for a level 0 and I have room for it, I always do the full 
dump each time and ignore the incremental issue.   In cases where
the full dump takes a long time, but there are typically not a lot of
changes on the system, I usually do a level 0, followed only by
a series of level 1 dumps until they tend to get large and then start
another level 0 dump.
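
In script form, a cycle like that is just a couple of lines (the file
names here are only an example):

   # weekend: full dump; -u records it in /etc/dumpdates
   dump -0ua -f /backup/usr.dump.0 /usr
   # weeknights: level 1 catches everything changed since the level 0
   dump -1ua -f /backup/usr.dump.1 /usr

When the level 1s start getting large, begin the cycle again with a
fresh level 0.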

 - Can dump incrementally update an existing dump, or is the idea that  
 a dump is a closed file and nothing except restore should ever touch it?

No, dump does not work that way.   It works on complete files.
It keeps a record of when the most recent dumps were done along with
the level of the dump that was done - in a file called /etc/dumpdates.   
Then, when it makes its list of files and directories to dump, it looks
at the date the file was changed.   If the change is more recent than
the date of the last dump at a lower level than the one currently being
done, it adds the file to the list and dumps it to the incremental
media.   Full dumps just set the date of the most recent dump to the
epoch (1970) so any file or directory changed since then is dumped.
Since that is the nominal beginning of time for UNIX, all files will
have been changed since then and thus be added to the list to be dumped.
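
For what it is worth, /etc/dumpdates is plain text, one line per
filesystem and level, updated whenever dump runs with -u.  An entry
looks roughly like this (the device name is just an example):

   /dev/ad0s1a                              0 Thu May 24 02:00:00 2007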

So, essentially, yes to the second part of the question.  A dump file
might as well be considered a closed file.  Incrementals are additional
closed files. 

 
 - How much does running a backup through gzip actually save?  Is  
 taxing the system to compress the dump and the extra time it takes  
 actually worth it, assuming I have enough space on my backup drive to  
 support a dump 0 or two?

As with other data, it depends on the data.   I never compress dumps.
Maybe I am a little superstitious, but I don't want any extra
complication in the way at the moment I find I need something from
the dump.   Also, you would have to uncompress the dump before you
could do an 'interactive' restore or any other partial restore.

jerry

 
 - Other folks dumping to a hard drive at night?  Care to share any of  
 your experiences/rationale?
 
 Thanks in advance.


Re: Backup advice

2007-05-24 Thread Jerry McAllister
On Wed, May 23, 2007 at 10:03:40PM -0400, Jason Lixfeld wrote:

 
 On 23-May-07, at 9:23 PM, Doug Hardie wrote:
 
 The criterion for selecting a backup approach is not the backup
 methodology but the restore methodology.
 
 Excellent point.
 
 Perhaps I'm asking the wrong question, so let me try it this way  
 instead:
 
 I'm looking for a backup solution that I can rely on in the event I  
 have a catastrophic server failure.  Ideally this backup would look  
 and act much like a clone of the production system.  In the worst
 case, I'd reformat the server array and copy the clone back to the
 server, set up the boot blocks, and that would be it.
 
 Ideally this clone should be verifiable, meaning I should be able to  
 verify its integrity so that it's not going to let me down if I need
 it.
 
 I'm thinking external USB hard drive of at least equal size to the  
 server array size as far as hardware goes, but I'm lost as far as  
 software goes.

Sounds like your situation is not quite as critical as the one in the
other post - somewhere in between.

If you want an immediately available clone, then the best thing is
to have an identical machine, preferably off-site and maintain it
as a clone, probably using rsync, although you can reasonably use
dump/restore for that too.  

If your need calls for just being back up in a reasonable length of
time, then you might prefer dumping to some media; if the need comes to
restore, then you would have to recreate the disk structure - using
fdisk, bsdlabel and newfs from the fixit image on the install CD, or
using sysinstall to run them for you.  (I suggest that any serious
System Manager become familiar with fdisk, bsdlabel and newfs, even
if you usually let sysinstall handle them for you.)  Then you would
use restore to pull the dumps back in.
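
Roughly, from the fixit environment, the rebuild looks like this
(ad0 and the partition letters are examples only):

   fdisk -BI ad0            # recreate the slice and MBR boot code
   bsdlabel -w -B ad0s1     # write a label; adjust it with bsdlabel -e
   newfs /dev/ad0s1a        # one newfs per partition
   mount /dev/ad0s1a /mnt
   cd /mnt
   restore -rf /path/to/root.dump.0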

If your system is super critical, as Doug Hardie posted about his,
then you may want to use some combination of rsync-ing to a clone
and making dumps, and consider storing some of these off-site.

jerry

 
 Any advice appreciated.


Re: Backup advice

2007-05-24 Thread Jerry McAllister
On Thu, May 24, 2007 at 03:10:43AM -0400, Jason Lixfeld wrote:

 
 On 24-May-07, at 12:33 AM, Doug Hardie wrote:
 
 
 On May 23, 2007, at 19:03, Jason Lixfeld wrote:
 
 
 On 23-May-07, at 9:23 PM, Doug Hardie wrote:
 
 The criterion for selecting a backup approach is not the backup
 methodology but the restore methodology.
 
 Excellent point.
 
 Perhaps I'm asking the wrong question, so let me try it this way  
 instead:
 
 software goes.
 
 What kind of data are you backing up?  If you are backing up the  
 system and your data then you have to be very careful about links.   
 Some backup solutions will copy the files as separate files.  When  
 you restore the link is gone.  An update to one of the linked files  
 will no longer be seen by the other names.  The OS uses a lot of  
 links.  If all you are backing up is data, it's probably not an
 issue.  

Yes, I neglected to mention the issue of veracity of the backups.
dump/restore is the only one that completely handles the hard links
the way you want.   It may also be the only one that handles ACLs
properly if you use those.   I haven't examined that issue.

 
 Dump seems to be the best at doing what I'm looking to do.  Better  
 than tar or rsync.  I think dd would beat out dump, but dd is far  
 less of a backup tool than dump is, so I think dump is still the  
 winner.  The caveat of a full dump taking the most time and resources  
 can be reasonably mitigated by doing a full dump every X intervals  
 and incremental in between.  It seems to be a fair compromise seeing  
 as how cheap hard drive space is these days.

Note that dd is not really a backup utility.  It is a data copy utility.
If you have a catastrophic failure on a disk and need to replace it,
there is every likelihood that the new drive will NOT be exactly 
like the old one.   Doing a disk build with fdisk, bsdlabel and newfs
and restoring from dumps would get you exactly what you want.  But,
using dd would not.  You would have an exact copy of the old boot
sectors, MBR, and partition tables, which would not be correct for the
new drive (although they might work, sort of).

 
 2 x system space would be enough for a full dump plus plenty of  
 increments, I'd say.  No?  Is there a rule of thumb?  3x?  4x?

Depends a lot on how much your data changes.   In your case, that
would include log files, since you intend to back up the whole 
system.   Other than log files, the system itself will not change
a lot.   But, I have no idea of what your data does.   I would
feel comfortable with 2X for my sort of stuff and be able to
do a full, plus maybe half a dozen incrementals or so.  But even 4X
might not cover it for some volatile systems.

 As far as restoring goes, let's assume my machine blew up one full  
 backup and 15 increments ago and I want to restore the entire system  
 in its entirety from my backup.  How is that done?  Point restore to
 the last incremental and it figures it out for itself, or is it a  
 manual process where I have to figure out what backups consist of the  
 complete system?

No, you first restore the full dump and continue through the 
incrementals in order of increasing level.   If you made more
than one incremental at a specific level, then only restore from
the last one made.
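
So, against a freshly newfs'd and mounted filesystem, the sequence is
roughly (the file names are illustrative):

   cd /mnt
   restore -rf /backup/usr.dump.0   # the full dump first
   restore -rf /backup/usr.dump.1   # then each higher level, in order
   restore -rf /backup/usr.dump.2
   rm restoresymtable               # bookkeeping file restore -r leaves behind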

 
 One backup disk is not all that great a safety approach.  You will  
 never know if that drive has failed till you try and use it.  Then  
 it's too late.  Failures do not require that the drive hardware has
 failed.  Any interruption in the copy can cause an issue that may  
 not be detected during the backup.  Sectors generally don't just go  
 bad sitting on the shelf, but it does happen.  That was a  
 significant problem with tapes.  Generally 25% of the tapes I used  
 to get back from off-site storage after a month were no longer  
 readable.
 
 There has to be some way for the OS to know if a drive is bad, or to  
 verify the state of the data that was just copied from one location  
 to another.  Is there no method of doing error correction?  My laptop  
 backup programs I've been using for years show me information at the
 end of the run:  Files copied, Speed, Time, Errors, etc.

The OS does see read/write errors on a disk and reports them.

dump will tell you if it thinks there was a media error, but
that doesn't tell you much - and probably doesn't mean much on
your laptop either.  It is probably a false sense of security.

There used to be a verify option on dump, or maybe it was in some
other proprietary version of UNIX I worked on.   But it made dumps
take so long that we quickly gave up using it.   It required reading
back the media and comparing it to the original.   Then the verify
often failed because a file was changed or deleted between the
time it was written and the time it was verified.  So, the verify
was not useful.   
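
restore does have a compare mode, -C, which checks a dump against the
live filesystem:

   restore -C -f /backup/usr.dump.0

but it suffers from exactly the problem described above: files that
changed after the dump was taken show up as mismatches, so its output
needs a grain of salt.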
 
 If a UNIX backup process is as unreliable as you're making it out to  
 be, then I could buy 10 drives and still potentially have each one  
 

Re: Backup advice

2007-05-24 Thread Jerry McAllister
On Thu, May 24, 2007 at 03:20:28AM -0400, Jason Lixfeld wrote:

 
 On 24-May-07, at 3:16 AM, Olivier Nicole wrote:
 
 2 x system space would be enough for a full dump plus plenty of
 increments, I'd say.  No?  Is there a rule of thumb?  3x?  4x?
 
 That depends on how much your file system changes. If every file changes
 before the incremental run, dump 1 will be as big as dump 0, and 2x will
 be enough for just dump 0 and dump 1.
 
 There is no rule.
 
 How would one go about gauging the rate of file system change on a
 system to determine a suitable amount of backup space?

To some extent, you must know how you use the system.
After that, it is just a matter of experience with that system.
After you have done this dump cycle a few times you will begin
to see a pattern.

jerry
 
 Olivier
 


Re: Backup advice

2007-05-24 Thread Roland Smith
On Wed, May 23, 2007 at 07:27:05PM -0400, Jason Lixfeld wrote:
  So I feel a need to start backing up my servers.  To that end, I've decided 
  that it's easier for me to grab an external USB drive instead of a tape.  It 

Buy at least two, and keep one off-site.

  would seem dump/restore are the tools of choice.  My backup strategy is 
  pretty much I don't want to be screwed if my RAID goes away.  That said I 
  have a few questions along those lines:
 
  - Most articles I've read suggest a full backup, followed by incremental 
  backups.  Is there any real reason to adopt that format for a backup 
  strategy like mine, or is it reasonable to just do a dump 0 nightly?  I 
  think the only reason to do just one full backup per 'cycle' would be to 
  preserve system resources, as I'm sure it's fairly taxing on the system 
  during dump 0 times.

Depending on the size of your data, a level 0 dump could take a couple
of hours. Unless you have a terabyte raid array, in which case a single
USB disk probably won't cut it. :)

On the other hand, if your dataset changes rapidly you might not save
much with incremental dumps.

You can save time by setting the nodump flag on directories that contain
files that you don't really need or can easily replace, such as
/usr/obj, /usr/ports/distfiles, /tmp et cetera.
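
For example (the directories are the ones named above; note that by
default dump honors nodump only at levels above 0, so a full dump
needs -h 0 to respect it):

   chflags -R nodump /usr/obj /usr/ports/distfiles /tmp
   dump -0uaL -h 0 -f /backup/usr.dump.0 /usr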

  - Can dump incrementally update an existing dump, or is the idea that a dump 
  is a closed file and nothing except restore should ever touch it?

You cannot update a dump file, AFAIK.

  - How much does running a backup through gzip actually save?  Is taxing the 
  system to compress the dump and the extra time it takes actually worth it, 
  assuming I have enough space on my backup drive to support a dump 0 or two?

It depends. On a normal filesystem you save about 50% with gzip. But if
you have lots of (already compressed) audio and picture data there are
almost no savings. Compressing with gzip shouldn't tax the system too
much, unless it's very old. Using bzip2 usually isn't worth it. It takes
much longer and maxes out the CPU on my 2.4 GHz Athlon64.

Do not forget the -L flag if you're dumping a live filesystem!
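
For example, dump to stdout and compress on the fly (the paths are
only an illustration):

   dump -0uaL -f - /usr | gzip > /mnt/usb/usr.dump.0.gz

and feed the decompressed stream back to restore when needed:

   gunzip -c /mnt/usb/usr.dump.0.gz | restore -rf -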

  - Other folks dumping to a hard drive at night?  Care to share any of your 
  experiences/rationale?

My desktop machine's file systems are backed up every week to a USB
drive, using gzipped dumps. Every month I start with a new level 0
dump. When I run out of space I delete the oldest set of dumps.

When I nuked my /usr partition by accident I was very happy to be able to
restore things with the tools in /rescue, without first having to
rebuild a lot of ports.

Roland
-- 
R.F.Smith   http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)




Backup advice

2007-05-23 Thread Jason Lixfeld
So I feel a need to start backing up my servers.  To that end, I've  
decided that it's easier for me to grab an external USB drive instead  
of a tape.  It would seem dump/restore are the tools of choice.  My  
backup strategy is pretty much I don't want to be screwed if my RAID  
goes away.  That said I have a few questions along those lines:


- Most articles I've read suggest a full backup, followed by  
incremental backups.  Is there any real reason to adopt that format  
for a backup strategy like mine, or is it reasonable to just do a  
dump 0 nightly?  I think the only reason to do just one full backup  
per 'cycle' would be to preserve system resources, as I'm sure it's  
fairly taxing on the system during dump 0 times.


- Can dump incrementally update an existing dump, or is the idea that  
a dump is a closed file and nothing except restore should ever touch it?


- How much does running a backup through gzip actually save?  Is  
taxing the system to compress the dump and the extra time it takes  
actually worth it, assuming I have enough space on my backup drive to  
support a dump 0 or two?


- Other folks dumping to a hard drive at night?  Care to share any of  
your experiences/rationale?


Thanks in advance.


Re: Backup advice

2007-05-23 Thread Howard Goldstein

Jason Lixfeld wrote:
- Other folks dumping to a hard drive at night?  Care to share any of 
your experiences/rationale?


Not with dump/restore.  After using amanda and a tape drive for eons I'm 
now happy with a bacula solution to back up 2 FreeBSDs and a Windows
machine.  It does incrementals except on Saturday night when it 
alternates between a differential (sort of a mass incremental from the 
last full) and a full backup to a cheap IDE drive.  Every Sunday I copy 
the IDE drive to a USB drive and take it offsite and bring back another 
one.


After restoring from scratch - power supply frying the entire RAID array 
 on my desktop -STABLE machine - I think the advantages of dump are 
certainly there but for my apps, where I don't have any huge sparse 
files or a lot of hard links other than whatever gets installed with a 
fresh install (if anything) to worry about - they're outweighed by the 
convenience of bacula where I can go back to a point in time.  YMMV...




Re: Backup advice

2007-05-23 Thread David N

On 24/05/07, Howard Goldstein [EMAIL PROTECTED] wrote:

Jason Lixfeld wrote:
 - Other folks dumping to a hard drive at night?  Care to share any of
 your experiences/rationale?

Not with dump/restore.  After using amanda and a tape drive for eons I'm
now happy with a bacula solution to back up 2 FreeBSDs and a Windows
machine.  It does incrementals except on Saturday night when it
alternates between a differential (sort of a mass incremental from the
last full) and a full backup to a cheap IDE drive.  Every Sunday I copy
the IDE drive to a USB drive and take it offsite and bring back another
one.

After restoring from scratch - power supply frying the entire RAID array
  on my desktop -STABLE machine - I think the advantages of dump are
certainly there but for my apps, where I don't have any huge sparse
files or a lot of hard links other than whatever gets installed with a
fresh install (if anything) to worry about - they're outweighed by the
convenience of bacula where I can go back to a point in time.  YMMV...




We have something similar, we use rsnapshot (/usr/ports/sysutils/rsnapshot/)
http://www.rsnapshot.org/

The first rsnapshot takes up the full amount of space; each rsnapshot
thereafter only takes up the space of the files that have changed.
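
A minimal rsnapshot.conf fragment, as an illustration (the paths are
examples, and the fields must be separated by tabs):

   snapshot_root   /backup/snapshots/
   interval        daily   7
   interval        weekly  4
   backup          /etc/           localhost/
   backup          /home/          localhost/

Unchanged files in each new snapshot are hard links back into the
previous one, which is where the space saving comes from.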


Re: Backup advice

2007-05-23 Thread Doug Hardie


On May 23, 2007, at 16:27, Jason Lixfeld wrote:

So I feel a need to start backing up my servers.  To that end, I've  
decided that it's easier for me to grab an external USB drive  
instead of a tape.  It would seem dump/restore are the tools of  
choice.  My backup strategy is pretty much I don't want to be  
screwed if my RAID goes away.  That said I have a few questions  
along those lines:


- Most articles I've read suggest a full backup, followed by  
incremental backups.  Is there any real reason to adopt that format  
for a backup strategy like mine, or is it reasonable to just do a  
dump 0 nightly?  I think the only reason to do just one full backup  
per 'cycle' would be to preserve system resources, as I'm sure it's  
fairly taxing on the system during dump 0 times.


- Can dump incrementally update an existing dump, or is the idea  
that a dump is a closed file and nothing except restore should ever  
touch it?


- How much does running a backup through gzip actually save?  Is  
taxing the system to compress the dump and the extra time it takes  
actually worth it, assuming I have enough space on my backup drive  
to support a dump 0 or two?


- Other folks dumping to a hard drive at night?  Care to share any  
of your experiences/rationale?


The criterion for selecting a backup approach is not the backup
methodology but the restore methodology.  What failures can you
tolerate, and what can you not afford to lose forever?  Backup to a
single disk leaves you with a big vulnerability if something is wrong
with that backup.  You stand to lose pretty much everything.  If
everything is stored in one location, what happens if it vanishes?


My approach is dictated by the restore requirements.  We have some  
databases that are absolutely critical.  Loss of those is the end of  
the world.  Every module that updates the database also writes a copy  
of the updated transaction to a log file.  I rsync the log file to  
multiple machines separated by many miles every 10 minutes.  The  
complete database is dumped every night and that is also rsync'd to
the same machines daily.  Each of the backup machines retains several  
months of the full dumps and the transaction logs.  From that  
presuming that one site remains available, I can reconstruct all but  
the last 10 minutes of the database.
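
The 10-minute copy is nothing fancy, just a cron entry along these
lines (the host and path names here are made up):

   */10 * * * * rsync -az /var/db/app/txn.log backuphost:/backup/txn.log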


The complete system is dumped to a disk on one of the local servers  
weekly.  A DVD is cut from that and taken off-site for retention.   
Actually all that is needed is the local software source and the  
config files as FreeBSD is easily replaced.  However, since there are  
always times where some of the ports may not be the latest version, it
is easier to have the actual ones in use rather than having to
check out newer versions.  The full dump is also rsync'd weekly to a
couple of off-site machines.


Whatever you decide to do, figure out how to recover and test it.   
Finding out you need something you didn't save is much less traumatic  
if you find out before a failure occurs.  I test my restore  
procedures yearly.  I have two machines I use at home for doing test  
recoveries. 
 


Re: Backup advice

2007-05-23 Thread Jason Lixfeld


On 23-May-07, at 9:23 PM, Doug Hardie wrote:

The criterion for selecting a backup approach is not the backup
methodology but the restore methodology.


Excellent point.

Perhaps I'm asking the wrong question, so let me try it this way  
instead:


I'm looking for a backup solution that I can rely on in the event I  
have a catastrophic server failure.  Ideally this backup would look  
and act much like a clone of the production system.  In the worst
case, I'd reformat the server array and copy the clone back to the
server, set up the boot blocks, and that would be it.


Ideally this clone should be verifiable, meaning I should be able to  
verify its integrity so that it's not going to let me down if I need
it.


I'm thinking external USB hard drive of at least equal size to the  
server array size as far as hardware goes, but I'm lost as far as  
software goes.


Any advice appreciated.


Re: Backup advice

2007-05-23 Thread Jonathan Horne
On Wednesday 23 May 2007 21:03:40 Jason Lixfeld wrote:
 On 23-May-07, at 9:23 PM, Doug Hardie wrote:
  The criterion for selecting a backup approach is not the backup
  methodology but the restore methodology.

 Excellent point.

 Perhaps I'm asking the wrong question, so let me try it this way
 instead:

 I'm looking for a backup solution that I can rely on in the event I
 have a catastrophic server failure.  Ideally this backup would look
 and act much like a clone of the production system.  In the worst
 case, I'd reformat the server array and copy the clone back to the
 server, set up the boot blocks, and that would be it.

 Ideally this clone should be verifiable, meaning I should be able to
 verify its integrity so that it's not going to let me down if I need
 it.

 I'm thinking external USB hard drive of at least equal size to the
 server array size as far as hardware goes, but I'm lost as far as
 software goes.

 Any advice appreciated.

in our enterprise, we use Veritas NetBackup.  expensive, but has been 100% 
reliable for us for years.  its Unix agent is currently being used on 
everything from Linux to OS X and FreeBSD.

unfortunately, this one doesn't fall under the free-of-cost category, but 
the Veritas tech support that comes with it has always been top notch.  just 
thought I'd throw that out there, in case this is for work and there is a 
budget line item to take care of this project :)
-- 
Jonathan Horne
http://dfwlpiki.dfwlp.org
[EMAIL PROTECTED]


Re: Backup advice

2007-05-23 Thread Olivier Nicole
Hi,

 So I feel a need to start backing up my servers.  To that end, I've  
 decided that it's easier for me to grab an external USB drive instead  
 of a tape.  

It's certainly faster, easier and cheaper nowadays.

 It would seem dump/restore are the tools of choice.

This really depends on what you are backing up. Dump is OS
specific. If what really, really matters is the data, not the OS, not
the software, tar may be more generic and could even be recovered on a
Windows machine.

  My  
 backup strategy is pretty much I don't want to be screwed if my RAID  
 goes away.  That said I have a few questions along those lines:

 - Most articles I've read suggest a full backup, followed by  
 incremental backups.  Is there any real reason to adopt that format  
 for a backup strategy like mine, or is it reasonable to just do a  
 dump 0 nightly?  I think the only reason to do just one full backup  
 per 'cycle' would be to preserve system resources, as I'm sure it's  
 fairly taxing on the system during dump 0 times.

Depends on your restore needs. Once you have backups you may soon find
out that you want to restore data from 2 days ago; if you keep only
the latest dump 0, you cannot.

If you plan to keep several runs of dump 0, going incremental will
save you time and disk space (at the cost of some time in case of
recovery, but we expect recovery will never be needed...)

 - Can dump incrementally update an existing dump, or is the idea that  
 a dump is a closed file and nothing except restore should ever touch it?

No. Incremental dumps are just other files, so for 50 GB of data and 5
GB changing each day, you would need a backup space equal to 50GB for
dump 0 plus 5GB for each incremental.

 - How much does running a backup through gzip actually save?  Is  
 taxing the system to compress the dump and the extra time it takes  
 actually worth it, assuming I have enough space on my backup drive to  
 support a dump 0 or two?

I am not sure whether dump output is already compressed.

 - Other folks dumping to a hard drive at night?  Care to share any of  
 your experiences/rationale?

I have been running Amanda for years, on tape because the tape drive
is still solid; the next medium will be an external drive (USB, eSATA,
whatever comes next). I back up FreeBSD, some Linuxes, various
Windows machines...

I use Gnu tar with Amanda, so my tapes can be read on many OS'es.

Olivier


Re: Backup advice

2007-05-23 Thread Doug Hardie


On May 23, 2007, at 19:03, Jason Lixfeld wrote:



On 23-May-07, at 9:23 PM, Doug Hardie wrote:

The criterion for selecting a backup approach is not the backup
methodology but the restore methodology.


Excellent point.

Perhaps I'm asking the wrong question, so let me try it this way  
instead:


I'm looking for a backup solution that I can rely on in the event I  
have a catastrophic server failure.  Ideally this backup would look  
and act much like a clone of the production system.  In the worst
case, I'd reformat the server array and copy the clone back to the
server, set up the boot blocks, and that would be it.


Ideally this clone should be verifiable, meaning I should be able  
to verify its integrity so that it's not going to let me down if I
need it.


I'm thinking external USB hard drive of at least equal size to the  
server array size as far as hardware goes, but I'm lost as far as  
software goes.


What kind of data are you backing up?  If you are backing up the  
system and your data then you have to be very careful about links.   
Some backup solutions will copy the files as separate files.  When  
you restore the link is gone.  An update to one of the linked files  
will no longer be seen by the other names.  The OS uses a lot of  
links.  If all you are backing up is data, it's probably not an
issue.  I have used both dump and tar successfully.  I currently use  
tar as I have many directories I don't want to back up.  Tar requires
some care and feeding to handle links properly.  It doesn't do it by  
default.  Dump does handle them properly by default.  Another option  
is rsync.  The advantage it has is that it only copies the changes in  
the file.  It will run a lot faster than dump or tar which will copy  
everything each time.  You do have to be careful with /dev if you are  
copying the root partition.
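
For a concrete sketch, an rsync invocation that preserves hard links
and keeps devfs out of the copy might look like this (the destination
path is just an example):

   # -a archive mode (permissions, times, owners, symlinks, devices)
   # -H preserve hard links -- not on by default
   # -x stay on one filesystem, so the devfs mounted on /dev is skipped
   rsync -aHx --delete / /mnt/backup/

Without -H, each hard-linked name lands on the backup as an
independent file.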


One backup disk is not all that great a safety approach.  You will  
never know if that drive has failed till you try and use it.  Then  
it's too late.  Failures do not require that the drive hardware has
failed.  Any interruption in the copy can cause an issue that may not  
be detected during the backup.  Sectors generally don't just go bad  
sitting on the shelf, but it does happen.  That was a significant  
problem with tapes.  Generally 25% of the tapes I used to get back  
from off-site storage after a month were no longer readable.




Re: newbie: weekly tape backup advice

2003-06-26 Thread admin
On Thu, 26 Jun 2003 00:36:01 -0700, Ryan Merrick wrote
 admin wrote:
  
  Hi,
  
  I need some help setting up a tape backup system.  I have two FreeBSD machines
  and one external SCSI OnStream ADR50.  Got any clues how I can start a weekly
  back up plan here? 
  
  Thanks in advance,
  
  Noah
  
  
 Take a look at afbackup in the ports, at /usr/ports/misc/afbackup.

okay this proggie is exactly what I need - do you have any clue how to figure
out the tape drive's device and blocksize?

- Noah

 
 Look at Storagemountain.com and look for articles by Curtis Preston.
 
 Ryan Merrick




newbie: weekly tape backup advice

2003-06-23 Thread admin


Hi,

I need some help setting up a tape backup system.  I have two FreeBSD machines
and one external SCSI OnStream ADR50.  Got any clues how I can start a weekly
back up plan here? 

Thanks in advance,

Noah



Re: newbie: weekly tape backup advice

2003-06-23 Thread Chris
admin wrote:

Hi,

I need some help setting up a tape backup system.  I have two FreeBSD machines
and one external SCSI OnStream ADR50.  Got any clues how I can start a weekly
back up plan here? 

Thanks in advance,

Noah


The key would be, a tape a day? Just kidding. If you can fit it all on 
one tape, and it's not a long backup - why not do a full backup as opposed 
to some sort of incremental one.

A cron job once a day should do the trick (man cron and man crontab) and I 
would think using dump (man dump) would also do the archiving.
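
For a weekly run, something like this in root's crontab would do it
(the filesystem is an example; nsa0 is the non-rewinding tape device):

   0 2 * * 0    /sbin/dump -0ua -f /dev/nsa0 /usr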

--

Best regards,
Chris
__
PGP Fingerprint = D976 2575 D0B4 E4B0 45CC AA09 0F93 FF80 C01B C363

PGP Mail encouraged / preferred - keys available on common key servers
__
  01010010011101100011011001010111001001011000


Re: newbie: weekly tape backup advice

2003-06-23 Thread Jerry McAllister
 
 Hi,
 
 I need some help setting up a tape backup system.  I have two FreeBSD machines
 and one external SCSI OnStream ADR50.  Got any clues how I can start a weekly
 back up plan here? 

It depends a little on the size of your disk compared to your tape
capacity.   It also depends on how much - in amount and frequency - your
critical data changes.

If you can fit everything you want to back up on one tape, just run a 
full backup (level 0 dump) each time being once per day or once per week
or whatever fits your data change pattern.

If your critical data change is a lot and a full backup of it will take
more than one tape, pick a convenient day of the week and do a full
back up and then do incremental backups (level 1 dump) other days.   

If your disk is so big and the amount of change so much that a week's
worth of incremental backup needs more than one tape, then you will 
want to do a weekly full backup and then increasing levels of 
incremental back up (level 1 - 6) on the other days.   

If your amount of data change is quite low - say it is just hosting a
fairly static web site and some information database you look at but 
don't update very often, you might want to consider doing only a weekly 
full backup or a monthly full backup and weekly incremental backups.

Use enough tapes so you are keeping at least three copies of each part
of the rotation before reusing a tape.   

You may also want to do a quarterly or annual archive dump that you store
off site and do not reuse for several rotations.

For sure, you want to use dump(8).  It is part of the system, does the 
right things with the files and is reliable and doesn't take any tinkering.

Unless you have a lot of very critical files open and being changed all
the time, don't bother with the warnings about dumping a live, running
system.   The dump will work just fine.   It only means that some file may 
change between the time the dump started and when it finishes, so that file's 
backup image might not be good.   But, if you are doing regular backups - and 
not just reusing the same tape all the time - you will catch that file in a 
good backup on another day.

The man page explains dump pretty well.  Mostly you shouldn't need to
worry about block size and all the other special stuff.  The defaults 
work best for most circumstances.   

Determining the media capacity may be the only difficult thing.  If one 
tape will hold the entire backup, just use the '-a' switch.   That can 
work well also for multiple tape dumps with tape drives that give a good 
end-of-media indication.  But some of them - DDS can be an annoying 
example - tend to not work well when getting near the end of media and 
will start getting write/read errors before the end-of-media indication 
actually happens.   Then, the system may not handle things very well and 
you may want to do some calculating and experimenting with either the '-B nnn' 
parameter or the '-d nnn' and '-s nnn' parameters to specify a media size 
and force it to change tapes before the problem area is reached.

You need to run dump(8) as root.  Eventually you will want to avoid
retyping the dump commands each time, or you will want them to run from cron 
at some time you are not around.  So either make a script and run it
while su-ed or logged in as root, or make a compiled program that will
do the dump calls; make it suid root, owned by root,
with a group of the only ids that will be allowed to run it, and
give it only 750 permissions.
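
A bare-bones sketch of such a script (the device and the list of
filesystems are examples):

   #!/bin/sh
   # dump several filesystems to one tape; the non-rewinding
   # device leaves the tape positioned so the dumps append
   TAPE=/dev/nsa0
   mt -f $TAPE rewind
   for fs in / /var /usr; do
       dump -0ua -f $TAPE $fs || exit 1
   done
   mt -f $TAPE offline          # rewind and unload when done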

One more thing from experience - do not run the head cleaner cartridge
any more often than you absolutely have to.  In a very clean environment
that can actually mean never.  But, you will probably need it now and
then.   Experience will tell when.   Those cleaners cause significant
wear on the heads and possibly the rest of the mechanism.   It doesn't
take much to wear those tiny heads down to nothing.  So, using them as
infrequently as possible will actually help increase head life, not 
reduce it as some of the accompanying printed material likes to
imply.  I think they are just a way to sell more replacement tape drives.

jerry

 
 Thanks in advance,
 
 Noah
 