Re: following the "balance" report

2018-11-12 Thread Gene Heskett
On Monday 12 November 2018 18:52:18 Debra S Baddorf wrote:

> > On Nov 12, 2018, at 5:30 PM, Gene Heskett 
> > wrote:
> >
> > On Monday 12 November 2018 15:44:04 Nathan Stratton Treadway wrote:
> >> On Mon, Nov 12, 2018 at 15:18:05 -0500, Gene Heskett wrote:
> >>>> So, if you want to do the deep dive to try to debug that quirk...
> >>>> the next step will be to look through the
> >>>>  /var/log/amanda/Daily/amdump.20181112*
> >>>> log file (assuming the run in question started after midnight
> >>>> this morning; if not the datetime-stamp would be 2018* of
> >>>> course).
> >>>
> >>> It did, I have some other stuff so it isn't started till about
> >>> 03:05.
> >>>
> >>>> Find the "ANALYZING ESTIMATES..." section, and cut-and-paste the
> >>>> log lines that discuss coyote:/var from that section on down
> >>>> through INITIAL SCHEDULE, PROMOTING DUMPS IF NEEDED, and on
> >>>> through the GENERATING SCHEDULE section  and we'll see if
> >>>> those lines tell us anything useful...
> >>>
> >>> Ok but my local prefix is /usr/local, and there is no such text
> >>> in /usr/local/tmp*, so the .debug files there are quite small, 15
> >>> to 20
> >>
> >> (Look at the top of the output of "amadmin Daily status"; the very
> >> first line should be
> >>  Using: [PATH...]/amdump.1
> >> , where [PATH...] is the path to the directory containing the
> >> amdump.* and log.* files.  [That will be a different directory than
> >> the one containing the *.debug log files.])
> >>
> >>Nathan
> >
> > Mm, my amadmin from a 3.5.1 (p1) build doesn't claim to have a
> > status option:
> > amanda@coyote:/tmp/amanda-dbg$ /usr/local/sbin/amadmin Daily status
> > /usr/local/sbin/amadmin: unknown command "status"
> >
> > And I do not think I am specifically building it that way.
> >
> > I seem to be full of puzzles. And I'm equally sure it is being an
> > excedrin headache. :(
>
> Gene - the command was corrected later.  It should be
> amstatus  Daily
>
> Deb Baddorf

Yup, Jon fixed it.


Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: following the "balance" report

2018-11-12 Thread Debra S Baddorf



> On Nov 12, 2018, at 5:30 PM, Gene Heskett  wrote:
> 
> On Monday 12 November 2018 15:44:04 Nathan Stratton Treadway wrote:
> 
>> On Mon, Nov 12, 2018 at 15:18:05 -0500, Gene Heskett wrote:
>>>> So, if you want to do the deep dive to try to debug that quirk...
>>>> the next step will be to look through the
>>>>  /var/log/amanda/Daily/amdump.20181112*
>>>> log file (assuming the run in question started after midnight this
>>>> morning; if not the datetime-stamp would be 2018* of course).
>>> 
>>> It did, I have some other stuff so it isn't started till about
>>> 03:05.
>>> 
>>>> Find the "ANALYZING ESTIMATES..." section, and cut-and-paste the
>>>> log lines that discuss coyote:/var from that section on down
>>>> through INITIAL SCHEDULE, PROMOTING DUMPS IF NEEDED, and on
>>>> through the GENERATING SCHEDULE section  and we'll see if
>>>> those lines tell us anything useful...
>>> 
>>> Ok but my local prefix is /usr/local, and there is no such text
>>> in /usr/local/tmp*, so the .debug files there are quite small, 15 to
>>> 20
>> 
>> (Look at the top of the output of "amadmin Daily status"; the very
>> first line should be
>>  Using: [PATH...]/amdump.1
>> , where [PATH...] is the path to the directory containing the amdump.*
>> and log.* files.  [That will be a different directory than the one
>> containing the *.debug log files.])
>> 
>>  Nathan
> 
> Mm, my amadmin from a 3.5.1 (p1) build doesn't claim to have a status 
> option:
> amanda@coyote:/tmp/amanda-dbg$ /usr/local/sbin/amadmin Daily status
> /usr/local/sbin/amadmin: unknown command "status"
> 
> And I do not think I am specifically building it that way.
> 
> I seem to be full of puzzles. And I'm equally sure it is being an 
> excedrin headache. :(
> 

Gene - the command was corrected later.  It should be
amstatus  Daily

Deb Baddorf




Re: following the "balance" report

2018-11-12 Thread Gene Heskett
On Monday 12 November 2018 15:44:04 Nathan Stratton Treadway wrote:

> On Mon, Nov 12, 2018 at 15:18:05 -0500, Gene Heskett wrote:
> > > So, if you want to do the deep dive to try to debug that quirk...
> > > the next step will be to look through the
> > >   /var/log/amanda/Daily/amdump.20181112*
> > > log file (assuming the run in question started after midnight this
> > > morning; if not the datetime-stamp would be 2018* of course).
> >
> > It did, I have some other stuff so it isn't started till about
> > 03:05.
> >
> > > Find the "ANALYZING ESTIMATES..." section, and cut-and-paste the
> > > log lines that discuss coyote:/var from that section on down
> > > through INITIAL SCHEDULE, PROMOTING DUMPS IF NEEDED, and on
> > > through the GENERATING SCHEDULE section  and we'll see if
> > > those lines tell us anything useful...
> >
> > Ok but my local prefix is /usr/local, and there is no such text
> > in /usr/local/tmp*, so the .debug files there are quite small, 15 to
> > 20
>
> (Look at the top of the output of "amadmin Daily status"; the very
> first line should be
>   Using: [PATH...]/amdump.1
> , where [PATH...] is the path to the directory containing the amdump.*
> and log.* files.  [That will be a different directory than the one
> containing the *.debug log files.])
>
>   Nathan

Mm, my amadmin from a 3.5.1 (p1) build doesn't claim to have a status 
option:
amanda@coyote:/tmp/amanda-dbg$ /usr/local/sbin/amadmin Daily status
/usr/local/sbin/amadmin: unknown command "status"

And I do not think I am specifically building it that way.

I seem to be full of puzzles. And I'm equally sure it is being an 
Excedrin headache. :(

Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: following the "balance" report

2018-11-12 Thread Nathan Stratton Treadway
On Mon, Nov 12, 2018 at 16:29:45 -0500, Jon LaBadie wrote:
> On Mon, Nov 12, 2018 at 03:44:04PM -0500, Nathan Stratton Treadway wrote:
> > (Look at the top of the output of "amadmin Daily status"; the very first
> > line should be 
> >   Using: [PATH...]/amdump.1
> > , where [PATH...] is the path to the directory containing the amdump.*
> > and log.* files.  [That will be a different directory than the one
> > containing the *.debug log files.])
> > 
> 
> I think you meant "amstatus Daily".

(Yes, you are correct, thanks for catching that.)

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: following the "balance" report

2018-11-12 Thread Jon LaBadie
On Mon, Nov 12, 2018 at 03:44:04PM -0500, Nathan Stratton Treadway wrote:
> On Mon, Nov 12, 2018 at 15:18:05 -0500, Gene Heskett wrote:
> > > So, if you want to do the deep dive to try to debug that quirk... the
> > > next step will be to look through the
> > >   /var/log/amanda/Daily/amdump.20181112*
> > > log file (assuming the run in question started after midnight this
> > > morning; if not the datetime-stamp would be 2018* of course).
> > 
> > It did, I have some other stuff so it isn't started till about 03:05.
> > 
> > > Find the "ANALYZING ESTIMATES..." section, and cut-and-paste the log
> > > lines that discuss coyote:/var from that section on down through
> > > INITIAL SCHEDULE, PROMOTING DUMPS IF NEEDED, and on through the
> > > GENERATING SCHEDULE section  and we'll see if those lines tell us
> > > anything useful...
> > 
> > Ok but my local prefix is /usr/local, and there is no such text 
> > in /usr/local/tmp*, so the .debug files there are quite small, 15 to 20 
> 
> (Look at the top of the output of "amadmin Daily status"; the very first
> line should be 
>   Using: [PATH...]/amdump.1
> , where [PATH...] is the path to the directory containing the amdump.*
> and log.* files.  [That will be a different directory than the one
> containing the *.debug log files.])
> 

I think you meant "amstatus Daily".

Jon
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


Re: following the "balance" report

2018-11-12 Thread Nathan Stratton Treadway
On Mon, Nov 12, 2018 at 15:18:05 -0500, Gene Heskett wrote:
> > So, if you want to do the deep dive to try to debug that quirk... the
> > next step will be to look through the
> >   /var/log/amanda/Daily/amdump.20181112*
> > log file (assuming the run in question started after midnight this
> > morning; if not the datetime-stamp would be 2018* of course).
> 
> It did, I have some other stuff so it isn't started till about 03:05.
> 
> > Find the "ANALYZING ESTIMATES..." section, and cut-and-paste the log
> > lines that discuss coyote:/var from that section on down through
> > INITIAL SCHEDULE, PROMOTING DUMPS IF NEEDED, and on through the
> > GENERATING SCHEDULE section  and we'll see if those lines tell us
> > anything useful...
> 
> Ok but my local prefix is /usr/local, and there is no such text 
> in /usr/local/tmp*, so the .debug files there are quite small, 15 to 20 

(Look at the top of the output of "amadmin Daily status"; the very first
line should be 
  Using: [PATH...]/amdump.1
, where [PATH...] is the path to the directory containing the amdump.*
and log.* files.  [That will be a different directory than the one
containing the *.debug log files.])
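
(A tiny sketch of pulling the log directory out of that first line; the
sample path below is made up, only the "Using: " prefix is taken from the
output described above:)

```python
import os

def amdump_log_dir(first_line: str) -> str:
    # Strip the "Using: " prefix and drop the amdump.* filename,
    # leaving the directory that also holds the log.* files.
    assert first_line.startswith("Using: ")
    return os.path.dirname(first_line[len("Using: "):].strip())

# hypothetical example path:
print(amdump_log_dir("Using: /var/log/amanda/Daily/amdump.1"))
# -> /var/log/amanda/Daily
```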

Nathan



Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: following the "balance" report

2018-11-12 Thread Gene Heskett
> > But the rest of the report shows that coyote /var was actually a
> > level 0.
> > coyote  /var  0   10971   10971   --   9:07 20543.3  2:55 64197.3
>
> Okay, that does seem odd.
>
> Normally I would say something about how the "bumped to level N" lines
> don't tell you that Amanda is actually going to do an incremental on
> > that run, but only tell you what level it would plan to use if it
> ends up deciding to do an incremental for that DLE.
>
> > However, off hand I would have expected coyote:/var to be mentioned
> in the "promoted from N days ahead" lines, but it's not there (and all
> 21 promoted DLEs are listed, so it seems something else is up).

I've had that impression for years. :)
>
> So, if you want to do the deep dive to try to debug that quirk... the
> next step will be to look through the
>   /var/log/amanda/Daily/amdump.20181112*
> log file (assuming the run in question started after midnight this
> morning; if not the datetime-stamp would be 2018* of course).

It did, I have some other stuff so it isn't started till about 03:05.

> Find the "ANALYZING ESTIMATES..." section, and cut-and-paste the log
> lines that discuss coyote:/var from that section on down through
> INITIAL SCHEDULE, PROMOTING DUMPS IF NEEDED, and on through the
> GENERATING SCHEDULE section  and we'll see if those lines tell us
> anything useful...

Ok, but my local prefix is /usr/local, and there is no such text 
in /usr/local/tmp*, so the .debug files there are quite small, 15 to 20 
lines. So given that my amanda gh.cf prefix is /usr/local, where do I 
find these amdump.20181112* log files? I am getting lost in a 
20,000 acre forest, yet my backup.sh wrapper is deleting the entries for 
reused vtapes.
 
> (Also, "amadmin Dailys info coyote /var" might possibly tell us
> something interesting as well.)

Huh, it's not Dailys but Daily in that path:
amanda@coyote:~$ /usr/local/sbin/amadmin Daily info coyote /var

Current info for coyote /var:
  Stats: dump rates (kps), Full:  20538.4, 20915.8, 23416.4
Incremental:  18325.7, 21792.3, 50128.9
  compressed size, Full:  92.6%, 92.5%, 92.5%
Incremental:  19.5%, 15.7%, 28.5%
  Dumps: lev datestmp  tape file   origK   compK secs
  0  20181112  Dailys-42  31  11234520  11234520  547

>
>   Nathan

So I'm out for a couple hours at least; I've a gcode file to fine-tune 
to cut out a DB25 panel cutout. And the mill is too small to be rigid 
enough. Even a carbide cutting tool will bend a few degrees before it 
breaks...

[...]

Copyright 2018 by Maurice E. Heskett
-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: All level 0 on the same run?

2018-11-12 Thread Nathan Stratton Treadway
On Mon, Nov 12, 2018 at 13:56:33 -0500, Chris Nighswonger wrote:
> backup@scriptor:~ amadmin campus info host dev
> 
> Current info for host dev:
>   Stats: dump rates (kps), Full:  6015.9, 5898.2, 5771.4
> Incremental:  1658.2, 1456.6, 343.4
>   compressed size, Full:  66.9%, 67.4%, 67.8%
> Incremental:  71.3%, 68.0%, 67.4%
>   Dumps: lev datestmp  tape file   origK   compK secs
>   0  19691231   0 -1 -1 -1
> 
>  That timestamp is a bit odd, but...
> 
> > (I don't know off hand why Amanda would have saved a "no data recorded"
> > status rather than still having a record of the last time ZWC ran
> > successfully -- perhaps the last successful dump went to a volume that
> > has since been overwritten, and so the entry had to be deleted from the
> > info database without anything new replacing it?)
> >
> >
> This makes the most sense. This client has been offline for two cycles.

Yeah, notice the "tape" column is empty, and the rest of the columns are
0 or -1.  It definitely removed all traces of whatever was there before
(and presumably there was something there at some point, since the
Stats: section has data), so the "last successful dump's volume has
since been overwritten" explanation seems to fit.

Nathan



Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Balance after DLE splitting

2018-11-12 Thread Chris Nighswonger
Was "All level 0 on the same run?"

The old thread was getting a bit stranded...

On Sat, Nov 10, 2018 at 5:03 PM Nathan Stratton Treadway 
wrote:

> On Sat, Nov 10, 2018 at 11:39:58 -0500, Chris Nighswonger wrote:
> > I think that is output from one of Gene's systems, but here is the latest
> > from mine after DLE balancing has been through one successful run. (The
> > next run will take place Monday morning @ 0200).
> >
> > backup@scriptor:~ amadmin campus balance
> >
> >  due-date  #fs     orig kB     out kB    balance
> > ----------------------------------------------------
> > 11/10 Sat    4   676705513  447515666    +178.0%
> > 11/11 Sun    8     8943859    5250400     -96.7%
> > 11/12 Mon   12   127984592   84074623     -47.8%
> > 11/13 Tue   19   304110025  267932333     +66.5%
> > 11/14 Wed    0           0          0        ---
> > ----------------------------------------------------
> > TOTAL       43  1117743989  804773022  160954604
> >   (estimated 5 runs per dumpcycle)
> >  (3 filesystems overdue. The most being overdue 17841 days.)
> >
> > Not sure what's up with the overdues. There were none prior to breaking
> > up the DLEs. It may just be an artifact.
>
>
>
This afternoon's balance report looks much nicer:

backup@scriptor:~ amadmin campus balance

 due-date  #fs     orig kB     out kB    balance
----------------------------------------------------
11/12 Mon   23   421824968  306613160     +90.3%
11/13 Tue   19   304110025  267932333     +66.3%

11/16 Fri    1   393088305  230949229     +43.4%
----------------------------------------------------
TOTAL       43  1119023298  805494722  161098944
  (estimated 5 runs per dumpcycle)
 (11 filesystems overdue. The most being overdue 17843 days.)

Kind Regards,
Chris


Re: All level 0 on the same run?

2018-11-12 Thread Chris Nighswonger
On Mon, Nov 12, 2018 at 1:50 PM Nathan Stratton Treadway 
wrote:

> On Mon, Nov 12, 2018 at 13:28:15 -0500, Chris Nighswonger wrote:
> > I found the long overdue culprit... It was a windows client using the ZWC
> > community client. The ZWC service had hung (not an uncommon problem) and
> > Amanda was missing backups from it. If I had been paying attention to the
> > nightly reports, I would have investigated it sooner. However, it does
> > beg the question: That DLE is NOT 49 years overdue... So where in the
> > world did the planner get that idea from?
>
> Again, that 1969 date is the "no data saved" timestamp.  You can see
> this mostly-directly by doing "amadmin campus info [HOST] [DEV]" on that
> DLE... or completely-directly by looking at the
>  .../campus/curinfo/[HOST]/[DEV]/info
> file (where the "seconds since the epoch" timestamp will show up as "-1"
> instead of a number in the range around 1542048289).
>
>
backup@scriptor:~ amadmin campus info host dev

Current info for host dev:
  Stats: dump rates (kps), Full:  6015.9, 5898.2, 5771.4
Incremental:  1658.2, 1456.6, 343.4
  compressed size, Full:  66.9%, 67.4%, 67.8%
Incremental:  71.3%, 68.0%, 67.4%
  Dumps: lev datestmp  tape file   origK   compK secs
  0  19691231   0 -1 -1 -1

 That timestamp is a bit odd, but...

> (I don't know off hand why Amanda would have saved a "no data recorded"
> status rather than still having a record of the last time ZWC ran
> successfully -- perhaps the last successful dump went to a volume that
> has since been overwritten, and so the entry had to be deleted from the
> info database without anything new replacing it?)
>
>
This makes the most sense. This client has been offline for two cycles.

Chris


Re: All level 0 on the same run?

2018-11-12 Thread Nathan Stratton Treadway
On Mon, Nov 12, 2018 at 13:28:15 -0500, Chris Nighswonger wrote:
> On Sat, Nov 10, 2018 at 5:03 PM Nathan Stratton Treadway 
> wrote:
> 
> > On Sat, Nov 10, 2018 at 11:39:58 -0500, Chris Nighswonger wrote:
> > >  (3 filesystems overdue. The most being overdue 17841 days.)
> > >
> > > Not sure what's up with the overdues. There were none prior to breaking
> > > up the DLEs. It may just be an artifact.
> >
> > With a 5-day dumpcycle that would mean Amanda thinks the last dump took
> > place 17846-ish days ago:
> >   $ date --date="17846 days ago"
> >   Wed Dec 31 14:24:39 EST 1969
> > ... and the date of 1969/12/31 is the "no data saved" placeholder date
> > within Amanda's info database.
> >
> > Anyway, you should be able to identify the three DLEs in question with
> >   amadmin campus due | grep "Overdue"
> > and then use "amadmin campus info [...]" to see what amanda has recorded
> > about them.
> >
> >
> I found the long overdue culprit... It was a windows client using the ZWC
> community client. The ZWC service had hung (not an uncommon problem) and
> Amanda was missing backups from it. If I had been paying attention to the
> nightly reports, I would have investigated it sooner. However, it does beg
> the question: That DLE is NOT 49 years overdue... So where in the world did
> the planner get that idea from?

Again, that 1969 date is the "no data saved" timestamp.  You can see
this mostly-directly by doing "amadmin campus info [HOST] [DEV]" on that
DLE... or completely-directly by looking at the 
 .../campus/curinfo/[HOST]/[DEV]/info 
file (where the "seconds since the epoch" timestamp will show up as "-1"
instead of a number in the range around 1542048289).
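
(For the curious, the 1969 rendering is just the Unix epoch at work; a
quick illustration in Python, nothing Amanda-specific:)

```python
from datetime import datetime, timezone

# -1 seconds-since-epoch is the last second of 1969 in UTC;
# US timezones shift it back into the afternoon of Dec 31.
print(datetime.fromtimestamp(-1, tz=timezone.utc))
# 1969-12-31 23:59:59+00:00

# whereas a "real" timestamp from this era of the thread:
print(datetime.fromtimestamp(1542048289, tz=timezone.utc).date())
# 2018-11-12
```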

(I don't know off hand why Amanda would have saved a "no data recorded"
status rather than still having a record of the last time ZWC ran
successfully -- perhaps the last successful dump went to a volume that
has since been overwritten, and so the entry had to be deleted from the
info database without anything new replacing it?)

Nathan


Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: All level 0 on the same run?

2018-11-12 Thread Chris Nighswonger
On Sat, Nov 10, 2018 at 5:03 PM Nathan Stratton Treadway 
wrote:

> On Sat, Nov 10, 2018 at 11:39:58 -0500, Chris Nighswonger wrote:
> >  (3 filesystems overdue. The most being overdue 17841 days.)
> >
> > Not sure what's up with the overdues. There were none prior to breaking
> > up the DLEs. It may just be an artifact.
>
> With a 5-day dumpcycle that would mean Amanda thinks the last dump took
> place 17846-ish days ago:
>   $ date --date="17846 days ago"
>   Wed Dec 31 14:24:39 EST 1969
> ... and the date of 1969/12/31 is the "no data saved" placeholder date
> within Amanda's info database.
>
> Anyway, you should be able to identify the three DLEs in question with
>   amadmin campus due | grep "Overdue"
> and then use "amadmin campus info [...]" to see what amanda has recorded
> about them.
>
>
I found the long overdue culprit... It was a windows client using the ZWC
community client. The ZWC service had hung (not an uncommon problem) and
Amanda was missing backups from it. If I had been paying attention to the
nightly reports, I would have investigated it sooner. However, it does beg
the question: That DLE is NOT 49 years overdue... So where in the world did
the planner get that idea from?
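
The arithmetic does line up with Nathan's earlier numbers, for what it's
worth: 17841 days overdue plus the 5-day dumpcycle puts the supposed
last dump ~17846 days before the Nov 10 report, i.e. right at the epoch
placeholder. A quick check (dates taken from the thread):

```python
from datetime import date, timedelta

report_day = date(2018, 11, 10)   # when Nathan ran his "date" example
overdue = 17841 + 5               # overdue days + 5-day dumpcycle
print(report_day - timedelta(days=overdue))
# 1969-12-31
```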

I'll check the balance again after tonight's run.

Kind regards,
Chris


Re: following the "balance" report

2018-11-12 Thread Nathan Stratton Treadway
On Mon, Nov 12, 2018 at 06:40:56 -0500, Gene Heskett wrote:
> amanda@coyote:/root$ /usr/local/sbin/amadmin Daily balance
> 
>  due-date  #fs   orig MB   out MB   balance
> ----------------------------------------------
> 11/12 Mon    1     32963     7875    -53.6%
> 11/13 Tue    1      7688     7688    -54.7%
> 11/14 Wed    2     22109    22109    +30.3%
> 11/15 Thu    4     75027    46623   +174.9%
> 11/16 Fri    4      3506     3506    -79.3%
> 11/17 Sat   14     12127     7581    -55.3%
> 11/18 Sun    4     21281    16842     -0.7%
> 11/19 Mon   14     27513    15343     -9.6%
> 11/20 Tue    1     49240    25295    +49.1%
> 11/21 Wed   22     24718    16774     -1.1%
> ----------------------------------------------
> TOTAL       67    276172   169636     16963
>   (estimated 10 runs per dumpcycle)
> 
> Which is only microscopically better. And the churn returns...

Hmmm, yes, now things are getting messy.


However, I think much of the churn you saw in this run can be explained
based on the balance figures:

In your balance table from a day ago, the entry for today's run was:

  11/11 Sun    1     10886    10886    -35.8%

... but note in the table after today's run that the 11/21 entry (which
normally would have included the same DLE(s) as the 11/11 did in
yesterday's table) is pretty close to a zero balance.  

So I think basically the idea is that the planner noticed that the total
of full dumps scheduled for that run was much less than the target
(average) daily size, and so it moved forward a whole bunch (i.e. 21) of
smallish DLEs from 11/16, 11/17, and 11/19, thus pulling the run's total
up to the target size.
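
(For reference, the balance column in these reports appears to be each
day's out-size measured against the per-run average, i.e. total out
divided by runs per cycle; a sketch using Gene's numbers, with the
formula inferred from the table rather than taken from Amanda's source:)

```python
# Figures from Gene's "amadmin Daily balance" output.
total_out_mb = 169636
runs = 10
avg = total_out_mb / runs   # 16963.6 -- the TOTAL row's last column

def balance_pct(day_out_mb):
    # percent above/below the per-run average
    return round((day_out_mb / avg - 1) * 100, 1)

print(balance_pct(7875))   # -53.6  (the 11/12 Mon row)
print(balance_pct(7688))   # -54.7  (the 11/13 Tue row)
```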

However, the catch is that it's really just borrowing DLEs from other
days that were already pretty undersized and that are now, after the
promotions, left with negative balance figures, so the shuffling is not
actually solving the balance problem over the course of the full cycle.

(I don't know the details of how the planner makes these choices, but I
would assume that the reason it doesn't promote DLEs from one of the
positive-balance days is that those DLEs are all so big that you can't
promote any of them without making today's run too large.)

We might still notice something interesting as we watch the balance
reports over the course of the dump cycle, but off hand I can't think of
any advice to give you, other than the "break up your huge DLEs" advice
you've already heard.


> NOTES:
>   planner: Incremental of coyote:/var bumped to level 6.
>   planner: Incremental of coyote:/home/gene/Mail bumped to level 2.
>   planner: Incremental of coyote:/home/amanda bumped to level 2.
>   planner: Full dump of coyote:/usr/local promoted from 6 days ahead.
>   planner: Full dump of coyote:/root promoted from 6 days ahead.
>   planner: Full dump of lathe:/etc promoted from 6 days ahead.
>   planner: Full dump of shop:/etc promoted from 6 days ahead.
>   planner: Full dump of GO704:/usr/local promoted from 6 days ahead.
>   planner: Full dump of coyote:/usr/bin promoted from 6 days ahead.
>   planner: Full dump of lathe:/home promoted from 6 days ahead.
>   planner: Full dump of GO704:/home promoted from 8 days ahead.
>   planner: Full dump of shop:/home promoted from 8 days ahead.
>   planner: Full dump of picnc:/boot promoted from 8 days ahead.
>   planner: Full dump of lathe:/usr/src promoted from 8 days ahead.
>   planner: Full dump of GO704:/root promoted from 6 days ahead.
>   planner: Full dump of shop:/lib/firmware promoted from 6 days ahead.
>   planner: Full dump of lathe:/lib/firmware promoted from 6 days ahead.
>   planner: Full dump of GO704:/lib/firmware promoted from 6 days ahead.
>   planner: Full dump of shop:/root promoted from 6 days ahead.
>   planner: Full dump of coyote:/usr/lib promoted from 5 days ahead.
>   planner: Full dump of lathe:/root promoted from 6 days ahead.
>   planner: Full dump of shop:/usr/lib/amanda promoted from 6 days ahead.
>   planner: Full dump of GO704:/usr/lib/amanda promoted from 6 days ahead.
>   planner: Full dump of coyote:/home/gene/Documents promoted from 5 days 
> ahead.
>   taper: tape Dailys-42 kb 18022402 fm 67 [OK
> 
> But the rest of the report shows that coyote /var was actually a level 0.
> coyote  /var  0   10971   10971   --   9:07 20543.3  2:55 64197.3

Okay, that does seem odd.

Normally I would say something about how the "bumped to level N" lines
don't tell you that Amanda is actually going to do an incremental on
that run, but only tell you what level it would plan to use if it ends
up deciding to do an incremental for that DLE.

However, off hand I would have expected coyote:/var to be mentioned in
the "promoted from N days ahead" lines, but it's not there (and all 21
promoted DLEs are listed, so it seems something else is up).

Re: following the "balance" report

2018-11-12 Thread Jon LaBadie
On Mon, Nov 12, 2018 at 06:40:56AM -0500, Gene Heskett wrote:
> amanda@coyote:/root$ /usr/local/sbin/amadmin Daily balance
> 
>  due-date  #fs   orig MB   out MB   balance
> ----------------------------------------------
> 11/12 Mon    1     32963     7875    -53.6%
> 11/13 Tue    1      7688     7688    -54.7%
> 11/14 Wed    2     22109    22109    +30.3%
> 11/15 Thu    4     75027    46623   +174.9%
> 11/16 Fri    4      3506     3506    -79.3%
> 11/17 Sat   14     12127     7581    -55.3%
> 11/18 Sun    4     21281    16842     -0.7%
> 11/19 Mon   14     27513    15343     -9.6%
> 11/20 Tue    1     49240    25295    +49.1%
> 11/21 Wed   22     24718    16774     -1.1%
> ----------------------------------------------
> TOTAL       67    276172   169636     16963
>   (estimated 10 runs per dumpcycle)
> 
For personal interest I ran the "balance command" and
got a result I've never seen before:

$ amadmin DS1 balance

 due-date  #fs   orig MB   out MB   balance
----------------------------------------------
11/12 Mon    1    207811   207094   +534.4%
11/13 Tue    1    135316    92641   +183.8%
11/14 Wed    0         0        0       ---
11/15 Thu    2     75363    46830    +43.5%
11/16 Fri    3     52415    22130    -32.2%
11/17 Sat    1     57080    48442    +48.4%
11/18 Sun    9     93945    29953     -8.2%
later        6    249656   228031   +598.5%
----------------------------------------------
TOTAL       17    621930   447090     63870
BALANCED          376137   228508     32644
DISTINCT    23    871586   675121
  (estimated 7 runs per dumpcycle)
$

I've never noticed a "later" entry in the list.
I'm sure it is because I have some DLEs with
nearly static data and I give those DLEs long
dumpcycles (3 or 4 weeks).

That also is the explanation for the total DLEs
(#fs) being 17 while there are actually 23 DLEs.
Six of them have dumpcycles longer than 7 days,
my "normal" dumpcycle.
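
(A toy illustration of that bucketing; the due-day offsets below are
invented, and only the idea that anything due past the current cycle's
window lands in "later" is taken from the report:)

```python
from datetime import date, timedelta

today = date(2018, 11, 12)
cycle_days = 7                                 # the "normal" dumpcycle
# hypothetical due dates: five short-cycle DLEs, two long-cycle ones
offsets = [0, 2, 4, 6, 6, 12, 25]
due = [today + timedelta(days=d) for d in offsets]

window_end = today + timedelta(days=cycle_days)
in_cycle = [d for d in due if d < window_end]  # counted in the day rows
later = [d for d in due if d >= window_end]    # lumped under "later"
print(len(in_cycle), len(later))
# 5 2
```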

Jon
-- 
Jon H. LaBadie j...@jgcomp.com
 11226 South Shore Rd.  (703) 787-0688 (H)
 Reston, VA  20190  (703) 935-6720 (C)


following the "balance" report

2018-11-12 Thread Gene Heskett
amanda@coyote:/root$ /usr/local/sbin/amadmin Daily balance

 due-date  #fs   orig MB   out MB   balance
----------------------------------------------
11/12 Mon    1     32963     7875    -53.6%
11/13 Tue    1      7688     7688    -54.7%
11/14 Wed    2     22109    22109    +30.3%
11/15 Thu    4     75027    46623   +174.9%
11/16 Fri    4      3506     3506    -79.3%
11/17 Sat   14     12127     7581    -55.3%
11/18 Sun    4     21281    16842     -0.7%
11/19 Mon   14     27513    15343     -9.6%
11/20 Tue    1     49240    25295    +49.1%
11/21 Wed   22     24718    16774     -1.1%
----------------------------------------------
TOTAL       67    276172   169636     16963
  (estimated 10 runs per dumpcycle)

Which is only microscopically better. And the churn returns...
USAGE BY TAPE:
  Label Time Size  %  DLEs Parts
  Dailys-42 0:04   17600M   39.1    67    67


NOTES:
  planner: Incremental of coyote:/var bumped to level 6.
  planner: Incremental of coyote:/home/gene/Mail bumped to level 2.
  planner: Incremental of coyote:/home/amanda bumped to level 2.
  planner: Full dump of coyote:/usr/local promoted from 6 days ahead.
  planner: Full dump of coyote:/root promoted from 6 days ahead.
  planner: Full dump of lathe:/etc promoted from 6 days ahead.
  planner: Full dump of shop:/etc promoted from 6 days ahead.
  planner: Full dump of GO704:/usr/local promoted from 6 days ahead.
  planner: Full dump of coyote:/usr/bin promoted from 6 days ahead.
  planner: Full dump of lathe:/home promoted from 6 days ahead.
  planner: Full dump of GO704:/home promoted from 8 days ahead.
  planner: Full dump of shop:/home promoted from 8 days ahead.
  planner: Full dump of picnc:/boot promoted from 8 days ahead.
  planner: Full dump of lathe:/usr/src promoted from 8 days ahead.
  planner: Full dump of GO704:/root promoted from 6 days ahead.
  planner: Full dump of shop:/lib/firmware promoted from 6 days ahead.
  planner: Full dump of lathe:/lib/firmware promoted from 6 days ahead.
  planner: Full dump of GO704:/lib/firmware promoted from 6 days ahead.
  planner: Full dump of shop:/root promoted from 6 days ahead.
  planner: Full dump of coyote:/usr/lib promoted from 5 days ahead.
  planner: Full dump of lathe:/root promoted from 6 days ahead.
  planner: Full dump of shop:/usr/lib/amanda promoted from 6 days ahead.
  planner: Full dump of GO704:/usr/lib/amanda promoted from 6 days ahead.
  planner: Full dump of coyote:/home/gene/Documents promoted from 5 days 
ahead.
  taper: tape Dailys-42 kb 18022402 fm 67 [OK

But the rest of the report shows that coyote /var was actually a level 0.
coyote  /var  0   10971   10971   --   9:07 20543.3  2:55 64197.3

Sigh...

-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page