Re: Always fill every tape

2018-03-14 Thread Jason L Tibbitts III
> "JM" == Jean-Francois Malouin  
> writes:

JM> This is not what I have experienced, at least with 3.3.x.

For the record, my server is 3.5.1.

JM> I've been using the following in some configs with the expected
JM> result:

OK, it's good to know that it has worked for someone at some point.  I was
using "flush-threshold-scheduled 70" and "flush-threshold-dumped 90", but
it would always flush whenever there was any data to flush, even if that
filled only a small portion of the tape.  This led to a two-day cycle:

Day 1: Nothing to flush, dump to the holding disk.
Day 2: Flush data from yesterday and write today's dumps to tape,
   filling about 40% of a tape.  Given the thresholds, it should not
   have written anything for one or two more days.
repeat

I've upped those limits to match yours, so we'll see what happens after a
few days.
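
For reference, the relevant chunk of my amanda.conf now looks roughly like
this (a sketch; the 70/90 values are what I had been running, and the 100s
match the settings from Jean-Francois's config quoted below):

# previously
flush-threshold-scheduled 70
flush-threshold-dumped 90

# now
flush-threshold-scheduled 100
flush-threshold-dumped 100
taperflush 100
autoflush yes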

 - J<



Re: Community ZWC Issue

2018-03-14 Thread Chris Nighswonger
Thanks Paddy. I also had to restart the ZWC service to make the log level
change effective.

So maybe someone who works on the ZWC code can explain what operation might
account for the following differences in log entries. Both are from the
same machine. The backup began failing with the 18/1/2018 run. I wonder if
ZWC is doing some sort of name resolution here which suddenly began to
fail? On the client, nslookup results are correct both ways (see below).
This same failure occurs on all of my ZWC installs.

ZWC working fine:

4424:384:17/1/2018:12:59:36:548::CZWCJobHandler : Entering
ExecuteValidateServerJob
4424:384:17/1/2018:12:59:36:548::CZWCJobHandler : Server address to be
validated from the list of authentic server = 192.168.x.x
4424:384:17/1/2018:12:59:36:548::CZWCJobHandler : Server name =
scriptor.foo.bar
4424:384:17/1/2018:12:59:36:548::CZWCJobHandler : Authentic Server name =
scriptor.foo.bar
4424:384:17/1/2018:12:59:36:548::CZWCJobHandler : Leaving
ExecuteValidateServerJob

ZWC borking:

5692:12004:14/3/2018:14:54:9:706::CZWCJobHandler : Entering
ExecuteValidateServerJob
5692:12004:14/3/2018:14:54:9:706::CZWCJobHandler : Server address to be
validated from the list of authentic server = 192.168.x.x
5692:12004:14/3/2018:14:54:9:706::CZWCJobHandler : Server name = 192.168.x.x
5692:12004:14/3/2018:14:54:9:706::CZWCJobHandler : Authentic Server name =
scriptor.foo.bar
5692:12004:14/3/2018:14:54:9:706::CZWCJobHandler : Server Validation Failed
- Server 192.168.x.x With IP 192.168.x.x is not registered
5692:12004:14/3/2018:14:54:9:706::CZWCJobHandler : Leaving
ExecuteValidateServerJob
5692:12004:14/3/2018:14:54:9:706::CZWCJobHandler:: Exiting ExecuteJob with
status = 65535

Client-side nslookup:

PS C:\Program Files\Zmanda\Zmanda Client for Windows Community
Edition(x64)\bin> nslookup 192.168.x.x
Server:  UnKnown
Address:  192.168.y.y

Name:    scriptor.foo.bar
Address:  192.168.x.x

PS C:\Program Files\Zmanda\Zmanda Client for Windows Community
Edition(x64)\bin> nslookup scriptor.foo.bar
Server:  UnKnown
Address:  192.168.x.x

Name:    scriptor.foo.bar
Address:  192.168.x.x

PS C:\Program Files\Zmanda\Zmanda Client for Windows Community
Edition(x64)\bin>
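
For anyone wanting to cross-check what the client itself has registered,
this is the sort of thing I mean (a sketch; it assumes 'Servers' is a string
value under the Install key, per the registry path mentioned earlier in this
thread):

PS C:\> (Get-ItemProperty "HKLM:\SOFTWARE\Zmanda\ZWC\1.0\Install").Servers
scriptor.foo.bar

If the failing validation really is comparing the incoming identity as a
bare IP against that registered name, that would square with adding the IP
to the registered-server list making amcheck pass again.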

Kind regards,
Chris


On Fri, Mar 9, 2018 at 10:46 AM, Paddy Sreenivasan <
paddy.sreeniva...@gmail.com> wrote:

>
>1. Open the ZWC Config Utility: Start Menu → All Programs → Zmanda → Zmanda
>Client for Windows → ZWC Config Utility
>2. Select the Logging tab
>3. Set Log Level to 5
>4. Click Save
>5. Click Exit
>
>
>
> On Fri, Mar 9, 2018 at 5:59 AM, Chris Nighswonger <
> cnighswon...@foundations.edu> wrote:
>
>> Jean-Louis can you help with this? Is there a way to up the verbosity
>> level of the ZWC log?
>>
>>
>> On Tue, Feb 27, 2018 at 10:43 AM, Chris Nighswonger <
>> cnighswon...@foundations.edu> wrote:
>>
>>> No takers?
>>>
>>> The ZWC is always a hard one to gin up help for.
>>>
>>> An additional piece of information: DNS resolution seems to be working
>>> fine both server and client side.
>>>
>>>
>>>
>>> On Fri, Feb 23, 2018 at 8:34 AM, Chris Nighswonger <
>>> cnighswon...@foundations.edu> wrote:
>>>
 Any thoughts on what might be going on here?

 Last week I began to have numerous Win10 clients failing in this
 fashion:

 backup@scriptor:/home/manager amcheck -c campus shipping.foo.bar

 Amanda Backup Client Hosts Check
 
 ERROR: shipping.foo.bar: Server validation Failed. Please register
 server with client.
 Client check: 1 host checked in 0.072 seconds.  1 problem found.

 Taking a look at the ZWC debug log on that client, I see the following:

 11020:9144:23/2/2018:08:16:7:22::CZWCJobHandler : Entering
 ExecuteValidateServerJob
 11020:9144:23/2/2018:08:16:7:22::CZWCJobHandler : Server address to be
 validated from the list of authentic server = 192.168.x.x
 11020:9144:23/2/2018:08:16:7:22::CZWCJobHandler : Server name =
 192.168.x.x
 11020:9144:23/2/2018:08:16:7:22::CZWCJobHandler : Authentic Server
 name = scriptor.foo.bar
 11020:9144:23/2/2018:08:16:7:22::CZWCJobHandler : Server Validation
 Failed - Server 192.168.x.x With IP 192.168.x.x is not registered
 11020:9144:23/2/2018:08:16:7:22::CZWCJobHandler : Leaving
 ExecuteValidateServerJob

 However, the server is registered with the client:

 Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Zmanda\ZWC\1.0\Install\Servers
 value is 'scriptor.foo.bar'

 If I add the server IP (192.168.x.x) to the list of registered Servers,
 everything checks out fine:

 backup@scriptor:/home/manager amcheck -c campus shipping.foo.bar

 Amanda Backup Client Hosts Check
 
 Client check: 1 host checked in 0.071 seconds.  0 problems found.

 (brought to you by Amanda 3.3.6)

 And the ZWC debug log says:

 

Re: Always fill every tape

2018-03-14 Thread Jean-Francois Malouin
Hi Jason,

* Jason L Tibbitts III <ti...@math.uh.edu> [20180314 11:33]:
> And so it turns out that if you turn off autoflush, Amanda will never
> flush existing dumps to tape regardless of the flush-threshold-*
> settings.  Which I guess makes sense.  And with "autoflush yes", it
> seems to simply flush everything currently on the holding disk
> regardless of other settings.  So as far as I can tell, the
> flush-threshold-* settings are simply not working as designed.
> 
> What can I tweak to get more insight into how Amanda decides when to
> flush existing dumps?  I will continue to play with the thresholds to
> see if I can get behavior closer to I'm looking for but it would be good
> to have some insight into the decision process.
> 
>  - J<

This is not what I have experienced, at least with 3.3.x.
I've been using the following in some configs with the expected result:

flush-threshold-dumped 100
flush-threshold-scheduled 100
taperflush 100
autoflush yes 

As stated in the amanda.conf manpage, those will force Amanda to start
writing to a new volume only if the data on the holding disk plus the
scheduled data amount to at least 100% of the volume capacity.  If anything
is left on the holding disk after a run, it will be flushed the next time
amdump runs.  Those constraints only apply when a new volume is
requested.
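
As a concrete example of my reading of that (verify against your version of
the manpage): with 50GB volumes and the settings above, amanda should only
start a new volume once the data sitting on the holding disk plus what is
scheduled for the current run adds up to at least 50GB; anything short of
that simply stays on the holding disk for a later run.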

Note that if you manually flush using amflush, you must override
those values with something like:

amflush -oflush-threshold-dumped=0 -oflush-threshold-scheduled=0 -otaperflush=0 
...

otherwise the flush constraints might not be met and the flush won't
complete entirely or at all.

Of course, I might be wrong so take this with a grain of salt!

cheers,
jf


Re: Always fill every tape

2018-03-14 Thread Jason L Tibbitts III
And so it turns out that if you turn off autoflush, Amanda will never
flush existing dumps to tape regardless of the flush-threshold-*
settings.  Which I guess makes sense.  And with "autoflush yes", it
seems to simply flush everything currently on the holding disk
regardless of other settings.  So as far as I can tell, the
flush-threshold-* settings are simply not working as designed.

What can I tweak to get more insight into how Amanda decides when to
flush existing dumps?  I will continue to play with the thresholds to
see if I can get behavior closer to I'm looking for but it would be good
to have some insight into the decision process.
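
I'm guessing the driver is the component that makes the flush decision, so
its debug output is probably the place to look.  A sketch of what I plan to
try (parameter names from my reading of amanda.conf(5), so double-check them
for your version):

debug-driver 9
debug-taper 9

and then dig through the amdump.* and driver debug files after the next run.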

 - J<


Re: "vaulting run" amdump email report issues -- "amdump --no-dump --no-flush" has MISSING in subject line

2018-03-14 Thread Nathan Stratton Treadway
On Wed, Nov 22, 2017 at 11:58:12 -0500, Jean-Louis Martineau wrote:
> On 22/11/17 12:56 AM, Nathan Stratton Treadway wrote:
> > I applied this patch and ran the test again, and "amdump --no-dump
> > --no-flush" worked just as well as with the previous patch, but showed
> > no "dumper lines" in the log.*.0 file.
> >
> > (Of course the email report did still have the unexpected RESULTS
> > MISSING line for each DLE.)
> I already committed a fix for that, use SVN or GIT.
> 

Sorry to revive this old thread, but I'm finally getting a chance to
look at this carefully under v3.5.1.


The context of this discussion is a configuration where "vault-storage"
is set and the main storage has a "vault" line pointing to that vault
storage.  If I then run a "normal" amdump without any vault vtapes being
available, this leaves a situation where there is a pending "vault"
operation for each affected DLE.
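
For reference, a minimal sketch of the kind of configuration I mean, using
the storage names from the report below (the exact "vault" syntax should be
checked against the 3.5.x amanda.conf manpage; this is from memory):

define storage "TestBackup" {
    # ... the usual tpchanger/tapetype/policy settings ...
    vault "TestOffsite" 0
}
define storage "TestOffsite" {
    # ... tpchanger pointing at the offsite/vault vtapes ...
}
storage "TestBackup"
vault-storage "TestOffsite"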


When I mounted a suitable vault vtape and ran the v3.5.1 "amdump
--no-dump --no-flush" command to perform the vaulting, the copy proceeded
as expected... and the Amanda mail report did not have the RESULTS
MISSING line for each DLE that used to appear, and generally looked
correct.

However, the one thing that did not quite seem correct is that the
Subject line of the email message _did_ have the word "MISSING" in it...

Included below is a (shortened) version of the email, in case that's
helpful in tracking down what is causing that "MISSING".

Nathan

=
To: natha...@ontko.com
Subject: TestBackup MISSING: AMANDA MAIL REPORT FOR February 21, 2018
Date: Wed, 21 Feb 2018 11:00:40 -0500

Hostname: tumhalad
Org : TestBackup
Config  : TestBackup
Date: February 21, 2018

These dumps to storage 'TestOffsite' were to tape TESTBACKUP-108.
The next tape Amanda expects to use for storage 'TestBackup' is: TESTBACKUP-06.
The next tape Amanda expects to use for storage 'TestOffsite' is: TESTBACKUP-109.


STATISTICS:
                          Total       Full      Incr.   Level:#
                        --------   --------   --------  -------
Estimate Time (hrs:min)    0:00
Run Time (hrs:min)         0:01
Dump Time (hrs:min)        0:00       0:00       0:00
Output Size (meg)           0.0        0.0        0.0
Original Size (meg)         0.0        0.0        0.0
Avg Compressed Size (%)      --         --         --
DLEs Dumped                   0          0          0
Avg Dump Rate (k/s)          --         --         --

Tape Time (hrs:min)        0:01       0:01       0:00
Tape Size (meg)          7106.0     6813.9      292.1
Tape Used (%)               3.5        3.3        0.1
DLEs Taped                    9          3          6   1:6
Parts Taped                   9          3          6   1:6
Avg Tp Write Rate (k/s) 92812.9    91808.4     124623


USAGE BY TAPE:
  Label            Time      Size     %  DLEs  Parts
  TESTBACKUP-108   0:01     7106M   3.5     9      9


NOTES:
  taper: Slot 3 with label TESTBACKUP-03 is usable
  taper: Slot 8 with label TESTBACKUP-108 is usable
  taper: tape TESTBACKUP-108 kb 7276536 fm 9 [OK]


DUMP SUMMARY:
                                      DUMPER STATS               TAPER STATS
HOSTNAME  DISK  L  ORIG-MB   OUT-MB  COMP%  MMM:SS     KB/s   MMM:SS      KB/s
--------- ----- -- -------- -------- ------ ------- --------  ------- ---------
client1   /     0              2093     --   VAULT             0:22    97401.6
client2   /     1                 4     --   VAULT             0:00    41540.0
[... similar lines for each DLE ...]
=




Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
Ray Ontko & Co.  -  Software consulting services  -   http://www.ontko.com/
 GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
 Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239


Re: constant crash on arm64

2018-03-14 Thread Gene Heskett
On Friday 09 March 2018 12:20:29 Gene Heskett wrote:

> On a rock64 TBE. Locally built 3.3.7p1 here as master, client is
> whatever is in the debian stretch repos.
>
> I will reboot it, and compose an excludes file by the time another
> dump starts. There is an excludes file now, but it exists only because
> it was touched by amanda. That got rid of the last amcheck fuss.
>
> More tomorrow I expect. It's running fine on a pi, which is armhf. But
> this arm64 seems to be a bit fussier. It also has 20x the i/o
> bandwidth the pi has.

Amanda failed to back up my /home/gene directory about 30 hours ago and
suggested I add a splitsize. I couldn't find that except as tape_splitsize,
so I added that to the .conf.

This morning I got 4 error messages saying it's been deprecated. This is the
same install and version that's been used here for several years.

From the email from amanda:

USAGE BY TAPE:
  Label        Time      Size     %  DLEs  Parts
  Dailys-10    0:10    45887M  91.8    64     86

NOTES:
  planner: Incremental of coyote:/home/gene bumped to level 5.
  taper: tape Dailys-10 kb 46987825 fm 86 [OK]
  big estimate: coyote /home/gene 0
  est: 84798M  out 45203M

However! Again from the email:
-------------------------------------------------------------------------------
DUMP SUMMARY:
                                             DUMPER STATS            TAPER STATS
HOSTNAME  DISK        L  ORIG-MB  OUT-MB  COMP%  MMM:SS    KB/s   MMM:SS     KB/s
--------- ----------- -- -------- ------- ------ ------- -------- ------- --------
coyote    /home/gene  0   145740   45203   31.0  166:31   4632.9    9:49  78587.0
-------------------------------------------------------------------------------
Note level 0 above.

3.3.7p1 on the server, which is also the client. I had added a 3rd vtape and
made the vtapes 50GB. That, however, will quickly fill the vtape drive, so I
need to restore the 34GB size of a vtape.

The error messages:
"/usr/local/etc/amanda/Daily//amanda.conf", line 488: warning: Keyword 
tape_splitsize is deprecated.
"/usr/local/etc/amanda/Daily//amanda.conf", line 488: warning: Keyword 
tape_splitsize is deprecated.

So what am I supposed to set up so it can split a dump across 2 or more vtapes?
Apparently the examples in the 3.3.7p1 tarball aren't exactly up to date.
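
Digging in the 3.3 manpage since writing the above, it looks like the
splitting knobs moved into the tapetype definition as part-size (plus the
part-cache-* options).  Is something like this the intended replacement?
(Just a guess on my part, with the length matching my 34GB vtapes:)

define tapetype VTAPE {
    length 34 gbytes
    part-size 3 gbytes
}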

Thanks.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page