Jon LaBadie wrote:

> Is this a problem you are having or just something you understand?

*ouch*. Please excuse my writing style. ;-)

> On most of my systems this non-zero status is not a problem.  On my cygwin client
> it did cause problems so I modified tar's source code to force a zero exit status.

Unfortunately, on the majority of my clients tar returns a non-zero exit status, so I 
will have to find a workaround.

I have received three suggestions so far:

1. Hack tar to force it to return a non-error status

I think option 1 is the best for systems that only have logfiles, since the only thing 
missing from the backup would be the appended lines, and those would be backed up 
the next day in any event. 

Q. Is there modified tar source posted somewhere on the net by some kind soul?
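Short of patching the source, a wrapper may be enough: GNU tar exits with status 1 for non-fatal conditions such as "file changed as we read it" and reserves status 2 for real errors, so the backup script can swallow status 1. A hedged sketch (the function name and invocation are my own, not anything shipped with tar):

```shell
# Hypothetical wrapper for the backup script: map GNU tar's
# exit status 1 ("file changed as we read it" and similar
# non-fatal differences) to 0, but let real failures
# (exit status 2) through untouched.
tar_ok() {
    tar "$@"
    status=$?
    [ "$status" -eq 1 ] && status=0
    return "$status"
}
```

Call `tar_ok` with exactly the arguments the backup currently passes to tar; only genuine errors will then fail the run.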

2. Ignore files which change size during a backup

This one is the least likely option, as even logfiles can be important.

3. Take a snapshot of the system before backing it up.

On systems that have files whose entire 'state' changes, it seems this is the most 
reasonable solution. Examples of this would be databases or hash files, etc., in which 
case a shell script could be written to take care of this before the backup runs.

Q: What do you do for large files that are constantly changing? For example, one of 
my files is 800MB and its 'state' changes fairly regularly, resulting in failed or 
inaccurate backups. I haven't figured out what to do with this one. I don't know of 
any way to 'lock' the file during a backup, or even while copying it somewhere else, 
for that matter. Does anyone have a suggestion here?
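Tar alone can't freeze a file, but if the program writing the 800MB file can be taught to take the same lock, cooperative advisory locking works. One hedged, portable sketch uses a mkdir-based lock (the lock path is an example, and this only helps if the writer honors the same lock):

```shell
# Hypothetical cooperative lock: mkdir(2) is atomic, so whichever
# process creates the lock directory first holds the lock. The
# writer of the big file must take the same lock for this to help.
with_lock() {
    lockdir=$1; shift
    while ! mkdir "$lockdir" 2>/dev/null; do
        sleep 1              # someone else holds the lock; wait
    done
    "$@"                     # e.g. cp of the large file
    status=$?
    rmdir "$lockdir"         # release the lock
    return "$status"
}
```

For example, `with_lock /var/lock/bigfile.lck cp /data/bigfile /backup/bigfile` (paths are illustrative) would copy the file only while no cooperating writer is mid-update.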

Thank you so much for your help!

cheers,
john.