Re: "bad status on taper SHM-WRITE (dumper)" message

2021-01-05 Thread Gene Heskett
On Tuesday 05 January 2021 03:39:33 Gene Heskett wrote:

> On Sunday 03 January 2021 05:30:45 Gene Heskett wrote:
> > On Sunday 03 January 2021 05:12:55 Gene Heskett wrote:
> > > On Friday 01 January 2021 15:23:47 you wrote:
> > > copied the list too, in case Nathan wants to chime in.
> > >
> > > > On Fri, Jan 01, 2021 at 08:19:00AM -0500, Gene Heskett wrote:
> > > > > On Wednesday 30 December 2020 02:06:41 Jon LaBadie wrote:
> > > >
> > > > ...
> > > >
> > > > > I've captured the amstatus output, would you like to see it?
> > > > > attached...
> > > >
> > > > What I'm really looking for is when/if it fails again, how/if
> > > > the amstatus output differs.
> > >
> > > aha, tonight's saved amstat, normally a bit over 10k, is only 44
> > > bytes! copy/pasted:
> > > root@coyote:amstat.d$ ls -l
> > > total 64
> > > -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:15 amstat-201231-0515
> > > -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:24 amstat-201231-0524
> > > -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:25 amstat-201231-0525
> > > -rw-rw-r-- 1 gene   gene   10112 Jan  1 02:38 amstat-210101-0238
> > > -rw-r--r-- 1 amanda amanda 10112 Jan  2 02:22 amstat-210102-0222
> > > -rw-r--r-- 1 amanda amanda    44 Jan  3 02:36 amstat-210103-0236
> > > root@coyote:amstat.d$ cat amstat-210103-0236
> > > Using: /usr/local/var/amanda/Daily/amdump.1
> > >
> > > finally, a clue! but of what? I've not a clue...
> >
> > And I forgot the email msg:
> > amstatus: bad status on taper SHM-WRITE (dumper): 20
> > at /usr/local/share/perl/5.24.1/Amanda/Status.pm line 929, <$fd>
> > line 4102.
> >
> > > > RE the current amstatus, the tape lines are weird:
> > > >
> > > >   taped   :  78   13029m   12942m (100.68%) (100.68%)
> > > > tape 1:  78   13029m   13029m ( 15.15%) Dailys-39 (78
> > > > parts)
> > > >
> > > > It is obviously talking about one tape so I'd expect the lines
> > > > to match numerically.  But the percentages particularly do not.
> > >
> > > I have this again: it left the dump (I assume the first failed
> > > /home/gene) in the holding disk in 2 parts because the crc failed
> > > on the first try, but it retried it, and copied both
> > > 00014.coyote._home_gene.1 AND 00015.coyote._home_gene.1 to the
> > > /amandatapes/Dailys/data link, with a 2 minute difference in
> > > file creation times.
> > >
> > > So I ran cmp with a boatload of options to see where they were
> > > different, and before I got tired of watching it fly by I was
> > > seeing nearly a 100% difference in the first 400 megs.
> > >
> > > Since this error existed on a spinning rust holding disk, and now
> > > on a 240Gig SSD, I'm inclined to point a finger at gzip --best, but
> > > am open to other explanations too.
> > >
> > > The next question that points to is best answered by untarring
> > > that indices.tar.
> > >
> > > Boiling that down to size with grep as follows:
> > > root@coyote:data$ tar -tf indices.tar | grep /_home_gene/20210103
> > > - returns:
> > > usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.header
> > > usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.state.gz
> > > usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1-unsorted.gz
> > >
> > > Which doesn't look like a double dump to me. Other explanations
> > > welcome.
> > >
> > > The two files in /amandatapes/Dailys/data, aren't quite the same
> > > size either: from an ls -l:
> > > -rw--- 1 amanda amanda 2547835625 Jan  3 02:14 00014.coyote._home_gene.1
> > > -rw--- 1 amanda amanda 2548295549 Jan  3 02:16 00015.coyote._home_gene.1
> > >
> > > > My most recent one doesn't match either, so must be normal:
> > > >
> > > >   taped   :  28   56238m   57697m ( 97.47%) ( 97.47%)
> > > > tape 1:  28   56238m   56238m ( 54.92%) DS1-158 (33
> > > > parts)
> > > >
> > > >
> > > > Jon
> > >
> > > To heck with it, I'm going back to bed.
>
> Did it again tonight. On a different DLE that it says it successfully
> retried. And the saved amstatus is only 44 bytes, same as the previous
> 3 failures.
>
> 'Using: /usr/local/var/amanda/Daily/amdump.1'
>
> Where it's normally over 10k bytes.  Nuked the leftover dump
> in /sdb/dumps, and reran the wrapper. And it's likely that sending this
> email will generate another "file removed before we read it" error.
>
> Copyright 2019 by Maurice E. Heskett
> Cheers, Gene Heskett

Since gzip seems to be a common item in the failure reports, I've 
reinstalled it. But the backup also ran for the second time tonight, 
generating no errors.

And sending the above email did not generate the file removed report.

I have also now done a complete re-install from src of 3.5.1. No options 
to ./configure or make; built as amanda, installed by root. I've also 
reduced the chunksize from 2000Mb to 1000Mb in amanda.conf.
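For reference, chunksize is set inside the holdingdisk section of 
amanda.conf; a sketch of what the changed stanza could look like (the 
directory path is assumed from the /sdb/dumps mention in this thread, 
and the use value is purely illustrative):

```
holdingdisk hd1 {
    directory "/sdb/dumps"    # holding area; path assumed from this thread
    use -100 mbytes           # illustrative reserve, not from this system
    chunksize 1000 mbytes     # reduced from 2000 mbytes
}
```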

And amcheck is happy.

Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)

Re: "bad status on taper SHM-WRITE (dumper)" message

2021-01-05 Thread Gene Heskett
On Sunday 03 January 2021 05:30:45 Gene Heskett wrote:

> On Sunday 03 January 2021 05:12:55 Gene Heskett wrote:
> > On Friday 01 January 2021 15:23:47 you wrote:
> > copied the list too, in case Nathan wants to chime in.
> >
> > > On Fri, Jan 01, 2021 at 08:19:00AM -0500, Gene Heskett wrote:
> > > > On Wednesday 30 December 2020 02:06:41 Jon LaBadie wrote:
> > >
> > > ...
> > >
> > > > I've captured the amstatus output, would you like to see it?
> > > > attached...
> > >
> > > What I'm really looking for is when/if it fails again, how/if the
> > > amstatus output differs.
> >
> > aha, tonight's saved amstat, normally a bit over 10k, is only 44
> > bytes! copy/pasted:
> > root@coyote:amstat.d$ ls -l
> > total 64
> > -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:15 amstat-201231-0515
> > -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:24 amstat-201231-0524
> > -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:25 amstat-201231-0525
> > -rw-rw-r-- 1 gene   gene   10112 Jan  1 02:38 amstat-210101-0238
> > -rw-r--r-- 1 amanda amanda 10112 Jan  2 02:22 amstat-210102-0222
> > -rw-r--r-- 1 amanda amanda    44 Jan  3 02:36 amstat-210103-0236
> > root@coyote:amstat.d$ cat amstat-210103-0236
> > Using: /usr/local/var/amanda/Daily/amdump.1
> >
> > finally, a clue! but of what? I've not a clue...
>
> And I forgot the email msg:
> amstatus: bad status on taper SHM-WRITE (dumper): 20
> at /usr/local/share/perl/5.24.1/Amanda/Status.pm line 929, <$fd> line
> 4102.
>
> > > RE the current amstatus, the tape lines are weird:
> > >
> > >   taped   :  78   13029m   12942m (100.68%) (100.68%)
> > > tape 1:  78   13029m   13029m ( 15.15%) Dailys-39 (78
> > > parts)
> > >
> > > It is obviously talking about one tape so I'd expect the lines
> > > to match numerically.  But the percentages particularly do not.
> >
> > I have this again: it left the dump (I assume the first failed
> > /home/gene) in the holding disk in 2 parts because the crc failed
> > on the first try, but it retried it, and copied both
> > 00014.coyote._home_gene.1 AND 00015.coyote._home_gene.1 to the
> > /amandatapes/Dailys/data link, with a 2 minute difference in
> > file creation times.
> >
> > So I ran cmp with a boatload of options to see where they were
> > different, and before I got tired of watching it fly by I was seeing
> > nearly a 100% difference in the first 400 megs.
> >
> > Since this error existed on a spinning rust holding disk, and now on
> > a 240Gig SSD, I'm inclined to point a finger at gzip --best, but am
> > open to other explanations too.
> >
> > The next question that points to is best answered by untarring that
> > indices.tar.
> >
> > Boiling that down to size with grep as follows:
> > root@coyote:data$ tar -tf indices.tar | grep /_home_gene/20210103 -
> > returns:
> > usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.header
> > usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.state.gz
> > usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1-unsorted.gz
> >
> > Which doesn't look like a double dump to me. Other explanations
> > welcome.
> >
> > The two files in /amandatapes/Dailys/data, aren't quite the same
> > size either: from an ls -l:
> > -rw--- 1 amanda amanda 2547835625 Jan  3 02:14 00014.coyote._home_gene.1
> > -rw--- 1 amanda amanda 2548295549 Jan  3 02:16 00015.coyote._home_gene.1
> >
> > > My most recent one doesn't match either, so must be normal:
> > >
> > >   taped   :  28   56238m   57697m ( 97.47%) ( 97.47%)
> > > tape 1:  28   56238m   56238m ( 54.92%) DS1-158 (33 parts)
> > >
> > >
> > > Jon
> >
> > To heck with it, I'm going back to bed.

Did it again tonight. On a different DLE that it says it successfully 
retried. And the saved amstatus is only 44 bytes, same as the previous 
3 failures.

'Using: /usr/local/var/amanda/Daily/amdump.1'

Where it's normally over 10k bytes.  Nuked the leftover dump 
in /sdb/dumps, and reran the wrapper. And it's likely that sending this 
email will generate another "file removed before we read it" error.

Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: "bad status on taper SHM-WRITE (dumper)" message

2021-01-03 Thread Gene Heskett
On Sunday 03 January 2021 05:12:55 Gene Heskett wrote:

> On Friday 01 January 2021 15:23:47 you wrote:
> copied the list too, in case Nathan wants to chime in.
>
> > On Fri, Jan 01, 2021 at 08:19:00AM -0500, Gene Heskett wrote:
> > > On Wednesday 30 December 2020 02:06:41 Jon LaBadie wrote:
> >
> > ...
> >
> > > I've captured the amstatus output, would you like to see it?
> > > attached...
> >
> > What I'm really looking for is when/if it fails again, how/if the
> > amstatus output differs.
>
> aha, tonight's saved amstat, normally a bit over 10k, is only 44 bytes!
> copy/pasted:
> root@coyote:amstat.d$ ls -l
> total 64
> -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:15 amstat-201231-0515
> -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:24 amstat-201231-0524
> -rw-r--r-- 1 amanda amanda 10112 Dec 31 05:25 amstat-201231-0525
> -rw-rw-r-- 1 gene   gene   10112 Jan  1 02:38 amstat-210101-0238
> -rw-r--r-- 1 amanda amanda 10112 Jan  2 02:22 amstat-210102-0222
> -rw-r--r-- 1 amanda amanda    44 Jan  3 02:36 amstat-210103-0236
> root@coyote:amstat.d$ cat amstat-210103-0236
> Using: /usr/local/var/amanda/Daily/amdump.1
>
> finally, a clue! but of what? I've not a clue...

And I forgot the email msg:
amstatus: bad status on taper SHM-WRITE (dumper): 20 
at /usr/local/share/perl/5.24.1/Amanda/Status.pm line 929, <$fd> line 
4102.

> > RE the current amstatus, the tape lines are weird:
> >
> >   taped   :  78   13029m   12942m (100.68%) (100.68%)
> > tape 1:  78   13029m   13029m ( 15.15%) Dailys-39 (78 parts)
> >
> > It is obviously talking about one tape so I'd expect the lines
> > to match numerically.  But the percentages particularly do not.
>
> I have this again: it left the dump (I assume the first failed
> /home/gene) in the holding disk in 2 parts because the crc failed
> on the first try, but it retried it, and copied both
> 00014.coyote._home_gene.1 AND 00015.coyote._home_gene.1 to the
> /amandatapes/Dailys/data link, with a 2 minute difference in
> file creation times.
>
> So I ran cmp with a boatload of options to see where they were
> different, and before I got tired of watching it fly by I was seeing
> nearly a 100% difference in the first 400 megs.
>
> Since this error existed on a spinning rust holding disk, and now on a
> 240Gig SSD, I'm inclined to point a finger at gzip --best, but am open
> to other explanations too.
>
> The next question that points to is best answered by untarring that
> indices.tar.
>
> Boiling that down to size with grep as follows:
> root@coyote:data$ tar -tf indices.tar | grep /_home_gene/20210103 -
> returns:
> usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.header
> usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.state.gz
> usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1-unsorted.gz
>
> Which doesn't look like a double dump to me. Other explanations
> welcome.
>
> The two files in /amandatapes/Dailys/data, aren't quite the same size
> either: from an ls -l:
> -rw--- 1 amanda amanda 2547835625 Jan  3 02:14 00014.coyote._home_gene.1
> -rw--- 1 amanda amanda 2548295549 Jan  3 02:16 00015.coyote._home_gene.1
>
> > My most recent one doesn't match either, so must be normal:
> >
> >   taped   :  28   56238m   57697m ( 97.47%) ( 97.47%)
> > tape 1:  28   56238m   56238m ( 54.92%) DS1-158 (33 parts)
> >
> >
> > Jon
>
> To heck with it, I'm going back to bed.
>
> Cheers, Gene Heskett



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: "bad status on taper SHM-WRITE (dumper)" message

2021-01-03 Thread Gene Heskett
On Friday 01 January 2021 15:23:47 you wrote:
copied the list too, in case Nathan wants to chime in.

> On Fri, Jan 01, 2021 at 08:19:00AM -0500, Gene Heskett wrote:
> > On Wednesday 30 December 2020 02:06:41 Jon LaBadie wrote:
>
> ...
>
> > I've captured the amstatus output, would you like to see it?
> > attached...
>
> What I'm really looking for is when/if it fails again, how/if the
> amstatus output differs.

aha, tonight's saved amstat, normally a bit over 10k, is only 44 bytes!
copy/pasted:
root@coyote:amstat.d$ ls -l
total 64
-rw-r--r-- 1 amanda amanda 10112 Dec 31 05:15 amstat-201231-0515
-rw-r--r-- 1 amanda amanda 10112 Dec 31 05:24 amstat-201231-0524
-rw-r--r-- 1 amanda amanda 10112 Dec 31 05:25 amstat-201231-0525
-rw-rw-r-- 1 gene   gene   10112 Jan  1 02:38 amstat-210101-0238
-rw-r--r-- 1 amanda amanda 10112 Jan  2 02:22 amstat-210102-0222
-rw-r--r-- 1 amanda amanda    44 Jan  3 02:36 amstat-210103-0236
root@coyote:amstat.d$ cat amstat-210103-0236
Using: /usr/local/var/amanda/Daily/amdump.1

finally, a clue! but of what? I've not a clue...
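For what it's worth, that truncated capture is exactly the "Using:" line 
and nothing else, so a wrapper could flag a failed run just from the 
saved file's size. A sketch; the threshold and file names are my 
assumptions, not anything from the real wrapper script:

```shell
#!/bin/sh
# Sketch: warn when a saved amstatus capture is suspiciously small, which
# in this thread has meant amstatus aborted with the SHM-WRITE error.
# The 1024-byte threshold is an assumption; normal captures here are ~10k.
check_amstat() {
    size=$(wc -c < "$1")
    if [ "$size" -lt 1024 ]; then
        echo "WARNING: $1 is only $size bytes; amstatus may have aborted"
        return 1
    fi
    return 0
}

# demo: a stand-in file holding only the 'Using:' line (44 bytes, like
# the truncated amstat-210103-0236 above)
printf 'Using: /usr/local/var/amanda/Daily/amdump.1\n' > /tmp/amstat-demo
check_amstat /tmp/amstat-demo || echo "capture looks truncated"
```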

> RE the current amstatus, the tape lines are weird:
>
>   taped   :  78   13029m   12942m (100.68%) (100.68%)
> tape 1:  78   13029m   13029m ( 15.15%) Dailys-39 (78 parts)
>
> It is obviously talking about one tape so I'd expect the lines
> to match numerically.  But the percentages particularly do not.
>
I have this again: it left the dump (I assume the first failed 
/home/gene) in the holding disk in 2 parts because the crc failed
on the first try, but it retried it, and copied both 
00014.coyote._home_gene.1 AND 00015.coyote._home_gene.1 to the
/amandatapes/Dailys/data link, with a 2 minute difference in
file creation times.

So I ran cmp with a boatload of options to see where they were different,
and before I got tired of watching it fly by I was seeing nearly a 100%
difference in the first 400 megs.

Since this error existed on a spinning rust holding disk, and now on a 
240Gig SSD, I'm inclined to point a finger at gzip --best, but am open
to other explanations too.

The next question that points to is best answered by untarring that indices.tar.

Boiling that down to size with grep as follows:
root@coyote:data$ tar -tf indices.tar | grep /_home_gene/20210103 -
returns:
usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.header
usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1.state.gz
usr/local/var/amanda/Daily/index/coyote/_home_gene/20210103020105_1-unsorted.gz

Which doesn't look like a double dump to me. Other explanations welcome.

The two files in /amandatapes/Dailys/data, aren't quite the same size either:
from an ls -l:
-rw--- 1 amanda amanda 2547835625 Jan  3 02:14 00014.coyote._home_gene.1
-rw--- 1 amanda amanda 2548295549 Jan  3 02:16 00015.coyote._home_gene.1
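Rather than watching cmp scroll by, the difference could be counted; a 
sketch assuming GNU cmp (for the -l and -n options), run here on tiny 
stand-in files since the real arguments would be the two chunks listed 
above:

```shell
# Sketch: count differing bytes between two dump images instead of
# eyeballing cmp's output. Assumes GNU cmp (for -l and -n).
diff_bytes() {
    # number of differing bytes within the first $3 bytes of $1 and $2
    cmp -l -n "$3" "$1" "$2" | wc -l
}

# demo on tiny stand-in files; on the real system the arguments would be
# 00014.coyote._home_gene.1 and 00015.coyote._home_gene.1
printf 'aaaa' > /tmp/d14
printf 'abab' > /tmp/d15
diff_bytes /tmp/d14 /tmp/d15 4    # prints 2: bytes 2 and 4 differ
```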

> My most recent one doesn't match either, so must be normal:
>
>   taped   :  28   56238m   57697m ( 97.47%) ( 97.47%)
> tape 1:  28   56238m   56238m ( 54.92%) DS1-158 (33 parts)
>
>
> Jon
To heck with it, I'm going back to bed.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: "bad status on taper SHM-WRITE (dumper)" message

2020-12-27 Thread Gene Heskett
On Wednesday 23 December 2020 16:50:40 Gene Heskett wrote:

> On Wednesday 23 December 2020 12:27:57 Nathan Stratton Treadway wrote:
> > On Wed, Dec 23, 2020 at 08:11:25 -0500, Gene Heskett wrote:
> > > amstatus: bad status on taper SHM-WRITE (dumper): 20 at
> > > /usr/local/share/perl/5.24.1/Amanda/Status.pm line 929, <$fd> line
> > > 3411.
> >
It screwed up again last night. But my script fixed it, so it's still a 
good backup. The original complaint was that Chromium had done some cache 
cleaning while the backup was running.

That sort of "file removed before we read it" was NOT a show stopper a 
year ago. What's been changed since?

> > [...]
> >
> > > But that log was overwritten by the flush.sh I did trying to
> > > complete the backup on vtape-30, so is gone forever. But vtape-31
> > > was not
> >
> > In Amanda 3.5, the "/var/log/amanda//amdump" path is
> > actually just a symlink pointing to the currently-active amdump file
> > among the multiple timestamped files (amdump.YYYYMMDDhhmmss), so the
> > original log file should still be out there.
> >
> > It will be interesting to compare the contents of the logs from
> > working and non-working runs.  So:
> >
> > 1) what's the timestamp for the run that generated this error?
> >
> > 2) what does
> >  $ grep SHM-WRITE amdump.202012[12]*
> >show (when run from within the correct .../log/amanda/...
> > directory)?
> >
> >(The idea being to grep through all the amdump.* files from the
> > past 13 days, just as a quick way to hit both good and bad runs.)
> >
> > 3) is there any correlation between the runs where amstatus returns
> > this error and other interesting messages appearing in the Amanda
> > Mail Report for those runs?
>
> You may want to fine tune that; it's several hundred hits of very long
> lines, 200+ chars in many of them. Errors do not appear to stand out
> in all that noise.
>
> > Nathan
> >
> > --
> > Nathan Stratton Treadway  -  [email protected]  -  Mid-Atlantic region
> > Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
> > GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
> > Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239
>
> Copyright 2019 by Maurice E. Heskett
> Cheers, Gene Heskett



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: "bad status on taper SHM-WRITE (dumper)" message

2020-12-23 Thread Gene Heskett
On Wednesday 23 December 2020 12:27:57 Nathan Stratton Treadway wrote:

> On Wed, Dec 23, 2020 at 08:11:25 -0500, Gene Heskett wrote:
> > amstatus: bad status on taper SHM-WRITE (dumper): 20 at
> > /usr/local/share/perl/5.24.1/Amanda/Status.pm line 929, <$fd> line
> > 3411.
>
> [...]
>
> > But that log was overwritten by the flush.sh I did trying to
> > complete the backup on vtape-30, so is gone forever. But vtape-31
> > was not
>
> In Amanda 3.5, the "/var/log/amanda//amdump" path is actually
> just a symlink pointing to the currently-active amdump file among the
> multiple timestamped files (amdump.YYYYMMDDhhmmss), so the original
> log file should still be out there.
>
> It will be interesting to compare the contents of the logs from
> working and non-working runs.  So:
>
> 1) what's the timestamp for the run that generated this error?
>
> 2) what does
>  $ grep SHM-WRITE amdump.202012[12]*
>show (when run from within the correct .../log/amanda/...
> directory)?
>
>(The idea being to grep through all the amdump.* files from the
> past 13 days, just as a quick way to hit both good and bad runs.)
>
> 3) is there any correlation between the runs where amstatus returns
> this error and other interesting messages appearing in the Amanda Mail
> Report for those runs?

You may want to fine tune that; it's several hundred hits of very long 
lines, 200+ chars in many of them. Errors do not appear to stand out in 
all that noise.

>   Nathan
>
> --
> Nathan Stratton Treadway  -  [email protected]  -  Mid-Atlantic region
> Ray Ontko & Co.  -  Software consulting services  -  http://www.ontko.com/
> GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
> Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



Copyright 2019 by Maurice E. Heskett
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 


Re: "bad status on taper SHM-WRITE (dumper)" message

2020-12-23 Thread Nathan Stratton Treadway
On Wed, Dec 23, 2020 at 08:11:25 -0500, Gene Heskett wrote:
> amstatus: bad status on taper SHM-WRITE (dumper): 20 at 
> /usr/local/share/perl/5.24.1/Amanda/Status.pm line 929, <$fd> 
> line 3411.
[...] 
> But that log was overwritten by the flush.sh I did trying to complete 
> the backup on vtape-30, so is gone forever. But vtape-31 was not 


In Amanda 3.5, the "/var/log/amanda//amdump" path is actually
just a symlink pointing to the currently-active amdump file among the multiple
timestamped files (amdump.YYYYMMDDhhmmss), so the original log file should
still be out there.

It will be interesting to compare the contents of the logs from working
and non-working runs.  So:

1) what's the timestamp for the run that generated this error?

2) what does 
 $ grep SHM-WRITE amdump.202012[12]*
   show (when run from within the correct .../log/amanda/... directory)?  

   (The idea being to grep through all the amdump.* files from the past
   13 days, just as a quick way to hit both good and bad runs.)

3) is there any correlation between the runs where amstatus returns this
   error and other interesting messages appearing in the Amanda Mail
   Report for those runs?
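(Gene's follow-up notes that the plain grep returns several hundred hits 
of very long lines, so here are a couple of hedged variants that keep the 
output readable. The demo amdump files below are stand-ins I made up; on 
the server these greps would run against the real timestamped logs in 
the log directory named above.)

```shell
# Sketch: narrower variants of the SHM-WRITE grep, for when the plain one
# drowns errors in hundreds of 200+ character status lines. The demo
# amdump files are stand-ins for the real timestamped logs.
mkdir -p /tmp/amdump-demo && cd /tmp/amdump-demo
printf 'taper: SHM-WRITE ok\ndumper: SHM-WRITE bad status 20\n' > amdump.20201223031502
printf 'taper: SHM-WRITE ok\n' > amdump.20201224031502

# hits per file: a quick map of which runs mention SHM-WRITE at all
grep -c SHM-WRITE amdump.202012[12]*

# keep only suspicious lines, truncated so very long lines stay readable
grep -h SHM-WRITE amdump.202012[12]* | grep -i 'bad' | cut -c1-120
```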

Nathan


Nathan Stratton Treadway  -  [email protected]  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239