Okay, I ran the following command on the cobalt01 machine:
---
/usr/local/libexec/runtar --create --directory /home --listed-incremental
/tmp/ff --sparse --one-file-system --ignore-failed-read --totals --file -
.>/dev/null
---
It gave me this result:
---
/usr/local/libexec/runtar: ./mysql/mysql.sock: socket ignored
/usr/local/libexec/runtar: ./tmp/.s.PGSQL.5432: socket ignored
Total bytes written: 165928960 (158MB, 79MB/s)
---
160 MB, not that much, huh? And it gives me the answer in just 1.5 seconds
or so...
I'm not quite sure whether a duration of 1.5 seconds is okay... Is this
command meant to make an inventory of the data to back up?
The result doesn't look fatal, and probably isn't at all.
Would it help if I excluded these files?
Or can I only exclude directories? Is there an example exclude.gtar file?
I've had no luck finding one so far.
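In case it helps, GNU tar reads exclude patterns from a file given with
--exclude-from, and the patterns can name files as well as directories. The
sketch below is just my own minimal example (the /tmp paths and the demo
tree are made up; a real list would live wherever your Amanda dumptype
points). Note that tar skips sockets on its own anyway -- the "socket
ignored" lines are harmless warnings, so excluding the sockets would only
silence the messages:

```shell
# Hypothetical exclude list: one glob pattern per line, matched against
# member names as tar prints them (the "./..." form).
cat > /tmp/exclude.gtar <<'EOF'
./mysql/mysql.sock
./tmp/.s.PGSQL.*
EOF

# Small demo tree standing in for /home, so this runs anywhere.
mkdir -p /tmp/demo/mysql /tmp/demo/tmp /tmp/demo/data
echo x > /tmp/demo/mysql/mysql.sock    # a plain file here, just for the demo
echo x > /tmp/demo/tmp/.s.PGSQL.5432
echo x > /tmp/demo/data/file.txt

# --exclude-from reads the pattern file; a matched directory is skipped
# together with everything beneath it, so both files and directories work.
tar --create --file /tmp/demo.tar --directory /tmp/demo \
    --exclude-from /tmp/exclude.gtar .

# List the archive: the excluded names should be absent.
tar --list --file /tmp/demo.tar
```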
I haven't been able to run amcheck while the process runs to see whether
any data is transmitted at all, because the backup runs at night, and I
don't know if I can simulate a backup without using the backup tapes (and
thus without increasing the tape number and the count of backups made).
Knowing that would help, so I could simulate and test...
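As a tape-free sanity check (my own suggestion, not an Amanda feature), the
same GNU tar can be run by hand with its output counted rather than stored:
if the byte count comes back, data is flowing; if it hangs, the timeout has
been reproduced without touching a tape. The /tmp path below is a
placeholder -- a real test would point --directory at /home and copy the
exact flags from the sendbackup*debug file:

```shell
# Small stand-in tree; substitute the real dump directory and flags.
mkdir -p /tmp/dumptest
echo "some data" > /tmp/dumptest/file.txt

# Write the archive to stdout and count the bytes instead of sending them
# anywhere; `time` (on stderr) shows whether and where the run stalls.
time tar --create --directory /tmp/dumptest --one-file-system \
    --file - . | wc -c
```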
Thanks again for additional help.
----- Original Message -----
From: John R. Jackson <[EMAIL PROTECTED]>
To: Robert Hoekstra <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Saturday, June 23, 2001 4:09 AM
Subject: Re: Problem backing up
> >FAIL dumper cobalt01.somewhere.nl /home 0 [data timeout]
> >...
> >Could it be because of MySQL (b)locking read-access to the tar process?
>
> As Olivier said, it's unlikely to be a locking problem. Unix does not,
> in general, apply mandatory locks. They are advisory, meaning a process
> (such as a backup program like GNU tar) is free to ignore them.
>
> If you watch the holding disk (or amstatus) while this is going on, how
> big does the image get? I'm interested in whether it's zero length,
> bigger than just the 32 KByte header, or if a lot of stuff got dumped
> and then it stalled.
>
> Here's another test. The /tmp/amanda/sendbackup*debug file on the client
> shows the GNU tar command line used.
>
> If you're doing a full dump (level 0), create a zero length temp
> file someplace ("cp /dev/null /tmp/xxx").
>
> Otherwise, look for the --listed-incremental flag file name. It should
> have a ".new" suffix. A file with that same name but without the ".new"
> should exist on the client. Copy that to the temp file.
>
> Now, run the command sendbackup tried to run but change the file passed
> to --listed-incremental to the temp file you just created. Send stdout
> of GNU tar to /dev/null (with output redirection -- do **not** change
> the --file argument) and see what happens.
>
> You can either run it like this as the Amanda user:
>
> .../runtar --create --file - ... > /dev/null
>
> or run it as root:
>
> gtar --create --file - ... > /dev/null
>
> As an alternative to ignoring the output, pipe the first GNU tar
> into a second that just does a catalogue ("... | gtar tvf -"). If it
> consistently gets to the same place when it hangs, that will tell you
> a lot about where the problem is.
>
> If it works fine and amdump/sendbackup still does not work, it's going
> to take some more head scratching (and probably some patches to try and
> log what's going on -- are you up for that?).
>
> >PS. the message is in plain text. ;-) ...
>
> Thanks! :-)
>
> John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
>