amrecover: use wildcards etc

2024-02-29 Thread Stefan G. Weichinger



I once posted about german umlauts and the issues in amrecover.

Today I hit that again.

In the amrecover UI I found two files in a specific DLE:

2024-01-15-19-30-02 "SOME & FILE Name �bersicht.xlsx"
2024-01-09-19-30-01 "SOME & FILE Name �bersicht.pdf"

note the quotes around the names

for other files there are no quotes, for some there are

I think the "&" and the umlaut "Ü" play a role.

My goal: restore the xlsx only.

I was NOT able to select ONLY the xlsx file above by using "add" or "addx"

add RE*.xlsx
addx RE*.xlsx
add "RE*.xlsx"
addc "RE*.xlsx"

etc etc etc

I used "\" to escape the special chars ...

It seems very likely to me that this hasn't been tested much and might 
simply be buggy.


Any hints, someone?

-

I have now recovered both files to get on with it, having to restore an 
unneeded pdf file from an extra tape along the way. Not very elegant.








Re: changer for more than one usb disk device

2023-11-19 Thread Stefan G. Weichinger

On 16.11.23 22:08, Moritz Both wrote:

Greetings,

we use amanda with usb disks and manually change them every day. I 
configured udev to create a symlink to /dev/amanda-vtape when the disk 
is plugged in.


In /etc/fstab I configured this device so it can be mounted on 
/mnt/amanda-vtape by the amanda user. This has worked very well for 
many years. Changer configuration is:


define changer mnt-amanda-vtape {
    tpchanger "chg-disk:/mnt/amanda-vtape"
    property "num-slot" "1"
    property "auto-create-slot" "yes"
    property "removable" "yes"
    property "mount" "yes"
    property "umount" "yes"
    property "umount-lockfile" "/var/lib/amanda/amanda-vtape.lock"
    property "umount-idle" "1"
    device-property "ENFORCE_MAX_VOLUME_USAGE" "false"
}

Nowadays the people changing the disks frequently work from home. So it 
would be nice to plug in more than one disk and have a tape changer scan 
for more than one disk. I understand there is no changer included in 
amanda which is directly capable of that.


I tried to use chg-aggregate, duplicated the above changer, but changed 
the mount point and lock file, and created this changer:


define changer mnt-amanda-morethanone-vtape {
     tpchanger "chg-aggregate:{mnt-amanda-vtape,mnt-amanda-vtape2}"
     property "state-filename" "/var/lib/amanda/aggregate-state"
}

Experiments with this suggest that on failure to mount a disk for the 
first chg-disk, it aborts with an error and does not try the other one:


root@peapod:~# su backup -s /bin/bash -c "/usr/sbin/amcheck DailySet1"
Amanda Tape Server Host Check
-
NOTE: Holding disk '/mnt/amandaholdingdisk/hd/DailySet1': 2960 MB disk 
space available, using 2960 MB

mount: /mnt/amanda-vtape2: must be superuser to use mount.
ERROR: mnt-amanda-vtape2: No removable disk mounted on '/mnt/amanda-vtape2'
Server check took 0.349 seconds

Shouldn't the aggregate changer try the next changer in this case? (The 
error message "must be superuser" can be ignored; once the device 
symlink from udev exists, it works.)


Also, I cannot think of a way to tell udev to create stable distinct 
device symlinks for more than one arbitrary usb disk. I could use 
/dev/sdc, /dev/sdd... but this is not very reliable if someone adds a 
disk to the system. It would be possible to create nodes like 
/dev/amanda-vdisk-3.1.2-0... or so (using the linux system device path) 
but how could amanda find these? After all, chg-disk needs the mount 
point, not the device node.


Current amanda version: 3.5.1
System: Debian bullseye

udev config:
KERNEL=="sd[a-z]", SUBSYSTEMS=="usb", DRIVERS=="usb-storage", SYMLINK+="amanda-vtape-disk", OWNER="backup", GROUP="backup"


/etc/fstab:
/dev/amanda-vtape   /mnt/amanda-vtape   ext4   noauto,owner,nofail   0 2



Any hints? Thanks in advance.


just a quick reply without details:

I have been using chg-aggregate with multiple external USB drives for years now

I don't use udev here ... I create fstab entries with the filesystem UUIDs
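
A trimmed sketch of that approach (UUIDs, paths and changer names are placeholders here):

/etc/fstab:

UUID=00000000-0000-0000-0000-000000000001  /mnt/vtape1  ext4  noauto,user,nofail  0  2
UUID=00000000-0000-0000-0000-000000000002  /mnt/vtape2  ext4  noauto,user,nofail  0  2

amanda.conf: one chg-disk per mount point, wrapped in chg-aggregate:

define changer vtape1 {
    tpchanger "chg-disk:/mnt/vtape1"
    property "removable" "yes"
    property "mount" "yes"
    property "umount" "yes"
}

define changer vtape2 {
    tpchanger "chg-disk:/mnt/vtape2"
    property "removable" "yes"
    property "mount" "yes"
    property "umount" "yes"
}

define changer usb-disks {
    tpchanger "chg-aggregate:{vtape1,vtape2}"
    property "state-filename" "/var/lib/amanda/usb-disks-state"
}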

If needed I can provide more details.



Re: multiple LTOs in the same library

2023-05-16 Thread Stefan G. Weichinger

Am 16.05.23 um 16:34 schrieb ASV:

Thanks exuvo, I'll try to dig a bit in that direction even though I was
expecting some built-in functionality for that.


Are the LTO7-tapes always in defined slots?

As far as I understand all *drives* are LTO8, but not all *tapes* ?







Last full dump overwritten

2023-04-11 Thread Stefan G. Weichinger



In an installation with two storage definitions with two POLICY 
definitions I often get report lines like this:


 planner: Last full dump of samba.some.tld:boot on tape ARC948 
overwritten in 3 runs.


As far as I know this means I have fewer tapes in rotation than 
"runspercycle".


It seems we can only have one global definition for "runspercycle", but 
multiple "RETENTION-TAPES" in multiple POLICY-definitions.


Maybe the concept of multiple storages wasn't fully implemented here.

goal:

I have ~50 tapes for the storage "daily" and ~10 tapes for the storage 
"archive".


"runspercycle" is 12 right now (2 weeks with 6 runs each).

For "daily" that's ok.
For "archive" it's not.

I tried

runspercycle -1 # as mentioned in the manpage

-

Maybe I have to give up trying to use 2 storage definitions in one 
amanda configuration.




Re: btrfs snapshots

2023-03-20 Thread Stefan G. Weichinger

Am 20.03.23 um 07:38 schrieb Kees Meijs | Nefos:

Hi there,

On 19-03-2023 18:37, Stefan G. Weichinger wrote:


Do / did you run fileservers on btrfs? 


Yes. There's a backup server (running something other than AMANDA) 
with a 40-something TB btrfs volume. Quite often (like every other 
month) we notice btrfs driver issues. Mostly this is a kernel oops, then 
a reboot (watchdog), and then minutes and minutes of remounting after the 
machine was rebooted. Then a timeout because the mount took too long, and 
then manual intervention.


What OS / distribution is that? What type of btrfs pool ("raid level")?

As far as I can tell there has been no loss yet in terms of data at 
rest, but obviously all state and running backups are affected.


The oopses seen are mostly cache- or CoW-related. Just disabling the 
free block cache and such, or other btrfs features, didn't really help. 
It's noteworthy that there are snapshots involved; 75 ~ 90 of them.


In general, we decided to move away from btrfs. Some systems have been 
migrated to ext4 (sometimes with LVM), some to ZFS.


Hm, I see. I am currently considering using btrfs for a new file server, 
so that is interesting to me. That box will be smaller, maybe 8 TB or 
so, running Debian 11.x (whatever is stable in a few months).


btrfs has seen quite a lot of development lately, but that is in the 
latest kernels: Debian stable won't come with 6.2 or anything that recent.


thanks for that information



Re: using Google Cloud for virtual tapes

2023-03-20 Thread Stefan G. Weichinger

Am 27.06.22 um 10:21 schrieb Stefan G. Weichinger:

Am 03.06.22 um 09:13 schrieb Stefan G. Weichinger:

I now at last received credentials to that gcs storage bucket, so I 
can start to try ...


Does it make sense to somehow follow 
https://wiki.zmanda.com/index.php/How_To:Backup_to_Amazon_S3 ?


I don't find anything mentioning Google Cloud in the Wiki.


*bump*


Still wondering.

Now with amanda-3.5.3 on github this might also bring updated code for 
the S3 stuff ... at least Chris told me back then that 3.5.2 would fix 
something related to my Google-Cloud needs.


I have a gc-service-account key and two bucket names ... and I'd love to 
hook them up with amanda straight away.


Anyone already done that?
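
My untested reading of that S3 howto, adapted for GCS via its S3-compatible (HMAC interoperability) mode; bucket name and slot layout are made up here:

define changer gcs_vtapes {
    tpchanger "chg-multi:s3:my-amanda-bucket/DailySet1/slot-{01,02,03,04}"
    device-property "S3_HOST" "storage.googleapis.com"
    device-property "S3_ACCESS_KEY" "HMAC-access-key-here"
    device-property "S3_SECRET_KEY" "HMAC-secret-here"
}

No idea yet whether the service-account key can be used this way or whether it needs an OAuth2 path instead.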

ps: built 3.5.3 deb packages today in a test VM


Re: btrfs snapshots

2023-03-19 Thread Stefan G. Weichinger

Am 17.03.23 um 09:20 schrieb Kees Meijs | Nefos:

Hi Stefan,

I'd recommend against btrfs and going for ZFS instead (also on Linux). 
Although I was a (really) great fan of btrfs, it just seemed to have too 
many bugs, sometimes even resulting in loss of data.


My experience is different; my btrfs pools haven't lost any data for years.

Do / did you run fileservers on btrfs?


btrfs snapshots

2023-03-17 Thread Stefan G. Weichinger



Does anyone successfully use btrfs snapshots with amanda?

Like doing LVM snapshots pre-amdump, etc.?
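
What I have in mind is a client-side hook, something like this (untested sketch; script name, paths and subvolume are made up):

#!/bin/sh
# btrfs-snap: recreate a read-only snapshot of /data before the dump,
# so amgtar can back up a frozen view instead of the live subvolume
SNAP=/data/.amanda-snapshot
btrfs subvolume delete "$SNAP" 2>/dev/null
exec btrfs subvolume snapshot -r /data "$SNAP"

hooked into the dumptype via a script definition, roughly:

define script btrfs-snap {
    plugin "btrfs-snap"          # hypothetical executable in the amanda script dir
    execute-where client
    execute-on pre-dle-backup
}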


Re: 2 CVEs in amanda-3.5.1

2023-02-13 Thread Stefan G. Weichinger

Am 22.01.23 um 09:25 schrieb Stefan G. Weichinger:


Just in case this hasn't yet been noticed by anyone upstream:

https://github.com/MaherAzzouzi/CVE-2022-37704

https://github.com/MaherAzzouzi/CVE-2022-37705

https://github.com/zmanda/amanda/issues/192


PRs on their way, as it seems:

https://github.com/zmanda/amanda/pull/197

https://github.com/zmanda/amanda/pull/198


I asked for distro packages, read this:

https://github.com/zmanda/amanda/issues/192#issuecomment-1427097294

Maybe it helps to cheer for the packages there (vote/like/ask there?).



Re: amflush and systemd-oomd

2023-02-05 Thread Stefan G. Weichinger

Am 05.02.23 um 15:39 schrieb Uwe Menges:

Hi,

I just updated my backup server from Fedora 35 to 37.
After the upgrade, I tried to run amflush.

The session running amflush was killed by systemd-oomd as per these
lines in the journal:

systemd[1]: session-5.scope: systemd-oomd killed some process(es) in 
this unit.

systemd-oomd[7544]: Considered 4 cgroups for killing, top candidates were:
systemd-oomd[7544]: Path: /user.slice/user-0.slice/session-5.scope
systemd-oomd[7544]: Memory Pressure Limit: 0.00%
systemd-oomd[7544]: Pressure: Avg10: 56.78 Avg60: 40.05 
Avg300: 14.53 Total: 53s

systemd-oomd[7544]: Current Memory Usage: 30.0G
systemd-oomd[7544]: Memory Min: 0B
systemd-oomd[7544]: Memory Low: 0B
systemd-oomd[7544]: Pgscan: 14100751

My workaround was to "systemctl edit systemd-oomd" and add "--dry-run" 
as argument

(https://www.freedesktop.org/software/systemd/man/systemd-oomd.service.html).
A "systemctl cat systemd-oomd" now has this override appended:

# /etc/systemd/system/systemd-oomd.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/lib/systemd/systemd-oomd --dry-run

After "systemctl restart systemd-oomd", this makes the log look like 
this when running amflush:


systemd-oomd[13095]: oomd dry-run: Would have tried to kill 
/sys/fs/cgroup/user.slice/user-0.slice/session-3.scope with recurse=true

systemd-oomd[13095]: Considered 4 cgroups for killing, top candidates were:
systemd-oomd[13095]: Path: /user.slice/user-0.slice/session-3.scope
systemd-oomd[13095]: Memory Pressure Limit: 0.00%
systemd-oomd[13095]: Pressure: Avg10: 54.15 Avg60: 35.63 
Avg300: 15.82 Total: 1min 25s

systemd-oomd[13095]: Current Memory Usage: 29.9G
systemd-oomd[13095]: Memory Min: 0B
systemd-oomd[13095]: Memory Low: 0B
systemd-oomd[13095]: Pgscan: 40620400
systemd-oomd[13095]: Last Pgscan: 40620400

And now amflush keeps running fine again.

I'm not sure what the right solution is, I just quickly wanted to get it
running again, and my itch is scratched.

Maybe something needs to be tweaked in systemd-oomd, or the amanda
package should include something to exempt amanda from systemd-oomd, or
something else.

Yours, Uwe



A quick google finds this:

https://fedoraproject.org/wiki/Changes/EnableSystemdOomd#Can_we_exclude_certain_units_from_being_killed?

So maybe you add a systemd override conf for the amanda.service (or 
however it is called in F37).
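
Untested, but following systemd.resource-control(5) an override like this should exempt the unit (the unit name is a guess):

# systemctl edit amanda.service
[Service]
ManagedOOMPreference=omit

That keeps systemd-oomd active for everything else instead of disabling it globally.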


You basically disabled systemd-oomd; it would have been even easier to 
run "systemctl disable --now systemd-oomd", I assume.


I am not sure if amflush triggers amandad, I would have to look that up 
(but I am on a train right now, so it's a bit complicated).


It should be rather easy to find the killed cgroups in the logs 
somewhere. From there you might find the related service.


I am using F37 too, but currently I don't have amanda installed on these 
machines.


Maybe it makes sense to come up with some fine-tuned service files and 
contact the Fedora developer maintaining Amanda.


I wonder when and if this feature comes to other distros as well, I 
haven't yet noticed it in Debian, for example.


regards, Stefan



2 CVEs in amanda-3.5.1

2023-01-22 Thread Stefan G. Weichinger



Just in case this hasn't yet been noticed by anyone upstream:

https://github.com/MaherAzzouzi/CVE-2022-37704

https://github.com/MaherAzzouzi/CVE-2022-37705

https://github.com/zmanda/amanda/issues/192



Re: Degraded dump in amanda 3.5.2

2023-01-20 Thread Stefan G. Weichinger

Am 18.01.23 um 14:09 schrieb Pablo Venini:

This is the amanda.conf

org "monitoreo10_diario"    # your organization name for 
reports

dumpuser "amandabackup" # the user to run dumps under
mailto "xxx...@y.zz"    # space separated list of operators at 
your site

dumpcycle 1week # the number of days in the normal dump cycle
tapecycle 8 # the number of tapes in rotation
runspercycle 7  # the number of amdump runs in dumpcycle days

define changer my_vtapes {
     tpchanger "chg-disk:/backups/amanda/vtape/diarios/monitoreo10_diario/"
     property "num-slot" "8"
     property "auto-create-slot" "yes"
}
dtimeout 1800   # number of idle seconds before a dump is aborted
ctimeout 30     # max number of seconds amcheck waits for each client
etimeout 300    # number of seconds per filesystem for estimates
define dumptype global {
    comment "Global definitions"
    auth "bsdtcp"
}
define dumptype gui-base {
    global
    program "GNUTAR"
    comment "gui base dumptype dumped with tar"
    compress none
    index yes
}
define tapetype HARDDISK {
    comment "Virtual Tapes"
    length 5 gbytes
}

define policy monitoreo10_diario {
     retention-tapes   7
     retention-days    0
     retention-recover 0
     retention-full    0
}

define storage monitoreo10_diario {
     policy "monitoreo10_diario" # the policy
     tapepool "monitoreo10_diario"   # the tape-pool
     tpchanger "my_vtapes"   # the changer
     runtapes 1                      # number of tapes to be used in a single run of amdump

     tapetype "HARDDISK"             # what kind of tape it is
     labelstr "monitoreo10_diario"   # label constraint regex: all tapes must match

     #autolabel
     #meta-autolabel
     taperscan "traditional"
     #max-dle-volume 100
     #taperalgo first
     #taper-parallel-write 1
}
storage "monitoreo10_diario"


Nothing stands out here.


includefile "advanced.conf"
includefile "/etc/amanda/template.d/dumptypes"
includefile "/etc/amanda/template.d/tapetypes"


What's in there? Maybe something interfering?

Maybe you also post the output of:

amadmin yourconf config

so we can see what is interpreted by amanda.

And maybe try 3.5.1. I think no one knows about the bugs of 3.5.2 yet.



Re: Degraded dump in amanda 3.5.2

2023-01-20 Thread Stefan G. Weichinger

Am 18.01.23 um 14:12 schrieb Pablo Venini:
Yes, I was also thinking about installing 3.5.1; I installed 3.5.2 from 
RPM so I thought it was ok (I was reinstalling an old server which ran 
3.3.9).


I tried your suggestion and if I run amdump for each DLE, the dumps are ok


Why don't you just set up a small holding disk?

I understand the intention to make it work without one, but I don't see 
the advantage of doing so.


Amanda is designed to use a holding disk; it's simply recommended and best 
practice ;-)
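
Even a small one should do, something like (path and sizes made up):

holdingdisk hd1 {
    directory "/var/amanda/holding"
    use 2 gbytes            # or a negative value to leave that much free
    chunksize 1 gbyte
}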







Re: Degraded dump in amanda 3.5.2

2023-01-18 Thread Stefan G. Weichinger

Am 18.01.23 um 01:47 schrieb Pablo Venini:
Hi Stefan, sorry for the delay in answering. This is a backup of the 
amanda server itself (configs, etc.), there is less than 100MB of 
information. What worries me is that if I use a holding disk without 
correcting the problem the dumps will go there instead of going to the 
vtapes.


Did you also test this config with amanda-3.5.1? 3.5.2 is rather 
untested, I think, as there are still no official packages available afaik.


I would define a small holding disk and see if that helps ... just for 
debugging purposes. A vtape setup also benefits from a holding disk; there 
is no reason not to use one with virtual tapes.


What if you only dump specific DLEs?

amdump myconf /etc







Re: Degraded dump in amanda 3.5.2

2023-01-18 Thread Stefan G. Weichinger

Am 18.01.23 um 01:43 schrieb Pablo Venini:
Hi Nathan, do you know if there is any setting to cause amanda to be 
more verbose?


Another weird thing is that the report says:

NOTES:
   planner: tapecycle (7) <= runspercycle (7)

However the config is like this:

tapecycle 8 # the number of tapes in rotation
runspercycle 7  # the number of amdump runs in dumpcycle days

the value reported for tapecycle is different from the one configured.


Maybe you show your amanda.conf for a start. This sounds wrong, yes.



Re: Degraded dump in amanda 3.5.2

2023-01-05 Thread Stefan G. Weichinger

Am 30.12.22 um 15:34 schrieb Pablo Venini:

Hi Nathan,

     no, (as far as I'm aware) I'm not using a holding 
disk as this is a vtape backup, the dumptype is as follows:


You could fix that by defining and using a holding disk.

Then a "degraded mode backup" should work.

The question is "why does it go into degraded mode" ?

I only quickly scanned your config:

* your tapetype defines a tape with 5 GB

Would your DLEs each fit onto one vtape?

I think you should use a larger vtape ... and make sure that the full 
backup of each DLE fits onto one vtape.


A holding disk wouldn't hurt either: it speeds things up etc








Re: what leads to a "new disk" ?

2022-12-05 Thread Stefan G. Weichinger

Am 05.12.22 um 19:41 schrieb Jose M Calhariz:


I wasn't aware that doing this "resets history" ... I just want to have lev0
runs only on that day


It blocks incremental backups from running until a lev0 has run successfully.
Now I only do an "amadmin <config> force <DLE>" when I know I have lost the
previous lev 0 or incrementals are broken.


So does it reset the history of the algorithm then?

My goal is to let the algorithm do its work and write fulls and 
incrementals to storage1 during the week, but "collect" fulls for the 
archive runs on sunday (and write those to storage2).



See my config for that "storage2":

define storage archive {
tapepool "archive"
policy "archive"
tapetype "LTO6"
tpchanger "robot_arc"
autolabel "ARC%b"
runtapes 4
dump-selection ALL FULL
flush-threshold-dumped  100 # (or more)
flush-threshold-scheduled   100 # (or more)
taperflush  0
}


"dump-selection" should only write fulls to these tapes.

But if I don't force fulls on sunday, I assume there would be only a few 
fulls scheduled by the normal algorithm and I wouldn't get fulls of ALL 
DLEs (which is the goal).


I am perfectly aware that I try to work against the original scheduling 
of amanda here. But I think it should be possible to achieve what I need 
here.


-

I assume this might be also related to my "forcing":

NOTES:
  planner: Last full dump of samba.xx.yy:smb-etc on tape ARC940 
overwritten in 3 runs.


How to interpret that?

thanks all



Re: what leads to a "new disk" ?

2022-12-05 Thread Stefan G. Weichinger

Am 05.12.22 um 15:03 schrieb Jose M Calhariz:

On Thu, Dec 01, 2022 at 10:18:03AM +0100, Stefan G. Weichinger wrote:


I have an installation where I didn't add or remove DLEs for a long time.

But now and then amanda seems to "forget" a DLE and come up with something
like:

samba.intra rootfs lev 0  FAILED [dumps too big, 42606931 KB, but cannot
incremental dump new disk]

The DLE is NOT new. Where does that come from?



By chance have you made a "amadmin Config force samba.intra rootfs"
just before that?


correct ;-)

I do that in my crontab:

30 19 * * 1-5 /usr/sbin/amdump myconfig -ostorage=storage1; /bin/mt-st -f /dev/nst0 offl

30 02 * * 7 /usr/sbin/amadmin myconfig force; /usr/sbin/amdump myconfig -ostorage=storage2



I wasn't aware that doing this "resets history" ... I just want to have 
lev0 runs only on that day


thx






Re: what leads to a "new disk" ?

2022-12-05 Thread Stefan G. Weichinger

Am 05.12.22 um 06:47 schrieb Nathan Stratton Treadway:

On Thu, Dec 01, 2022 at 10:18:03 +0100, Stefan G. Weichinger wrote:


I have an installation where I didn't add or remove DLEs for a long time.

But now and then amanda seems to "forget" a DLE and come up with
something like:

samba.intra rootfs lev 0  FAILED [dumps too big, 42606931 KB, but
cannot incremental dump new disk]

The DLE is NOT new. Where does that come from?


Looks like the source file server-src/planner.c generates that message
if the "last_level" data element for the DLE is negative...

What does "amadmin  info " report for that DLE (during the
period when you are getting this message, i.e. before the next
successful full dump takes place)?


Well, I have to wait for that, I assume.

On the weekend I forced a lev0 for all DLEs to get a full archive set 
onto 4 tapes.


So I have something like:

Dumps: lev datestmp
   0   20221204

for all the DLEs right now.


Let me add that this is my "multi storage" setup:

I have 2 storages with separate tape sets, and different dumpcycles.

~50 tapes for the daily backups

~16 for the weekends (where I use 4 tapes per run)






what leads to a "new disk" ?

2022-12-01 Thread Stefan G. Weichinger



I have an installation where I didn't add or remove DLEs for a long time.

But now and then amanda seems to "forget" a DLE and come up with 
something like:


samba.intra rootfs lev 0  FAILED [dumps too big, 42606931 KB, but cannot 
incremental dump new disk]


The DLE is NOT new. Where does that come from?


Re: amvault hangs

2022-11-24 Thread Stefan G. Weichinger

Am 22.11.22 um 17:13 schrieb Christian:

Hello,

I'm running a very small amanda installation to create regular backups 
of local files, which also involves archiving at some point. For archiving 
files, I run a full backup that ends up in holding, and then I run 
amvault twice to write the backup set to two different disk changers. 
Finally, I set no-reuse on the created media and delete the backup from 
holding.


However, for quite some time now, amvault does not work anymore. I 
believe it was after the upgrade to 3.5.1 that it stopped working.


Unfortunately I can't help with amvault. I was never successful or happy 
with that tool.


In general I suggest you tell us more about which OS you use amanda with 
and something about your amanda configuration.


I went for defining multiple storages in amanda.conf.

Maybe useful for you as well.

Check the thread "Multiple storage-definitions, generating archive 
tapes" if interested.




Re: Multiple storage-definitions, generating archive tapes

2022-10-30 Thread Stefan G. Weichinger

Am 30.10.22 um 16:32 schrieb Exuvo:
Are you not supposed to put all the single-letter options before the 
configuration name? Is that maybe why it does not work? I.e.:

amflush -o storage=storage1 configurationNameHere




No.

amflush config -o storage=storage1

works fine so far.

My question is: how can I check which storage I have to flush things to, 
without starting amflush?


The policies for the 2 storages are different, so the holding disk files 
of some DLEs go to storage1 and others to storage2, but I don't know in 
advance where amflush would put them.




Re: Multiple storage-definitions, generating archive tapes

2022-10-24 Thread Stefan G. Weichinger



So far my setups with multiple storages work.

One of the missing features (or I don't know how):

if dumps stay in the holding disk I don't know which storage they would 
go to via amflush.


I start

amflush config -o storage=storage1

and then see in amstatus that all DLEs would go to storage2.

amflush stops then, OK, and I can start over, but sometimes I'd like to 
check that beforehand, because there are a few small DLEs for one storage 
and many DLEs for the other etc


Is there a way to check that without starting amflush?


Re: dumporder

2022-10-04 Thread Stefan G. Weichinger

Am 06.11.18 um 19:18 schrieb Chris Nighswonger:

This seems to work:

amplot /var/backups/campus/log/amdump.1

Running under the amanda user.

However, the issue now is the attempt to write the output to the X11 terminal:


gnuplot: unable to open display ''
gnuplot: X11 aborted.

Not sure what all that's about. So I'm doing a bit of hacking on the
gnuplot script to have it write the results out to a png file.


Did you succeed? ;-)

I am currently trying to use amplot on Debian 11.5 and get a ps file 
where the margins don't match the text etc


I am also thinking of generating a png file after every amdump run and 
mailing it as well.
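
I guess that means patching the terminal setup in amplot's gnuplot script (amplot.g), roughly:

set terminal png size 1024,768
set output "amplot.png"

instead of the X11/postscript terminal it uses now.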






Re: New Amanda Community release 3.5.2 Has Arrived! -- email bounce

2022-09-30 Thread Stefan G. Weichinger

Am 29.09.22 um 02:03 schrieb Nathan Stratton Treadway:

On Tue, Sep 13, 2022 at 16:11:15 -0400, Nathan Stratton Treadway wrote:

On Tue, Sep 13, 2022 at 17:29:41 +0200, Stefan G. Weichinger wrote:

I received:

"Your message to chris.hass...@betsol.com couldn't be delivered.

Chris.Hassell wasn't found at betsol.com."


(I just tried sending email to this email address.  My message was
accepted for delivery by the Betsol mail server, and I haven't received
any bounce message back [after waiting a few minutes].  So hopefully the
bounce you saw was just a temporary misconfiguration on the Betsol mail
server...(?) )


Ack -- I just discovered that my 9/13 test message did result in a
bounce message after all.  (The bounce message went to my spam
folder.)

I tried again just now, and his email address still bounced.

So it would seem that Chris is indeed no longer at Betsol  :(


Ah, that's bad. So again no more direct upstream contact at Betsol.

Still no packages provided by them for 3.5.2, still no activity at the 
Github-issues, etc etc


Sad.

Let's see who follows.

Thanks for the test, Nathan.



Re: Amanda Community release 3.5.2 - debian test packages

2022-09-19 Thread Stefan G. Weichinger

Am 16.09.22 um 17:52 schrieb gene heskett:

The first thing I note is the use of port 5000, and I'm wondering about 
the possibility of fighting over the ports, as Octoprint uses FQDN:5000 
as its web server to administer its designated 3d printer.

I expect it's changeable, and I've done it to Octoprint just for a test; 
works fine. But is that a potential problem if octoprint is sitting idle 
on a client at the time amanda gets fired up? Something to check out, 
I think, Stefan. Thanks.


Where do you see that port 5000?

I grepped through the sources and I am not sure ...

my installation in a test VM seems to work so far, although it is not yet 
fully compatible with the Debian packages, as Jose mentioned




Re: Amanda Community release 3.5.2 - debian test packages

2022-09-19 Thread Stefan G. Weichinger

Am 16.09.22 um 16:55 schrieb gene heskett:

On 9/16/22 07:28, Stefan G. Weichinger wrote:
I might rebuild later if needed, just wanted to share asap. Right now 
I am quite busy with X things.



I agree, Wayland in place of X needs all the help it can get.


I didn't mean X11 ;-)  .. wanted to say something like "busy with many 
things"


No issues with Wayland here, btw.



Amanda Community release 3.5.2 - debian test packages

2022-09-16 Thread Stefan G. Weichinger

Am 02.08.22 um 17:26 schrieb Jose M Calhariz:


I am not making progress in solving the autotools problems.

I copied the debian directory from official Debian 3.5.1, refreshed the
patches, and ran "debuild".  It fails with a lot of warnings and this
relevant error:

automake: error: cannot open < config/amanda/file-list: No such file or 
directory
autoreconf: automake failed with exit status: 1
find ! -ipath "./debian/*" -a ! \( -path '*/.git/*' -o -path '*/.hg/*' -o 
-path '*/.bzr/*' -o -path '*/.svn/*' -o -path '*/CVS/*' \) -a  -type f -exec md5sum {} + -o 
-type l -printf "symlink  %p
" > debian/autoreconf.after
dh_autoreconf: autoreconf -f -i returned exit code 1
make: *** [debian/rules:41: build] Error 2
dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
debuild: fatal error at line 1182:
dpkg-buildpackage -us -uc -ui -i -ICVS -I.svn failed


I was able to build debian packages today. This is straight from the 
github repository, I only applied one patch to the 3.5.2 code to make it 
work (that patch is in 3.6 beta already).


I decided to share the packages with all you amanda-users, just to get 
some testing and reviewing going:


https://oc.oops.co.at/nextcloud/s/kRbZ82ikQ4a8gjn

! these are NOT official upstream packages ! I cannot guarantee 
anything :-) !


Looking forward to any test results. Please only use these packages 
in TEST environments.


one more:

I remember differences between upstream and debian: backup user etc

I might rebuild later if needed, just wanted to share asap. Right now I 
am quite busy with X things.




Re: New Amanda Community release 3.5.2 Has Arrived! -- email bounce

2022-09-14 Thread Stefan G. Weichinger

Am 13.09.22 um 22:11 schrieb Nathan Stratton Treadway:


(I just tried sending email to this email address.  My message was
accepted for delivery by the Betsol mail server, and I haven't received
any bounce message back [after waiting a few minutes].  So hopefully the
bounce you saw was just a temporary misconfiguration on the Betsol mail
server...(?) )


Interesting, thanks for testing.

Did you get a reply already?

Would be great to know if someone at Betsol is responding to the 
community, and when we see installable packages from them.




Re: New Amanda Community release 3.5.2 Has Arrived!

2022-09-13 Thread Stefan G. Weichinger

Am 13.09.22 um 14:14 schrieb Stefan G. Weichinger:

@Chris ... any news? Could you pls provide packages, for example for 
Debian 11.x ?


thanks


I received:

"Your message to chris.hass...@betsol.com couldn't be delivered.

Chris.Hassell wasn't found at betsol.com."




Re: New Amanda Community release 3.5.2 Has Arrived!

2022-09-13 Thread Stefan G. Weichinger

Am 24.08.22 um 14:10 schrieb Stefan G. Weichinger:

Am 02.08.22 um 20:31 schrieb Nathan Stratton Treadway:

On Tue, Aug 02, 2022 at 12:30:07 -0400, gene heskett wrote:

And where do I get the debian approved versions of 3.5.2?


That's what Jose is attempting to create now... (So watch this thread
for news.)


We are :-)

Tried another build in my test VM today ... still does not work.

And still no packages (for current distributions) from upstream at 
https://www.zmanda.com/downloads/


@Chris ... any news? Could you pls provide packages, for example for 
Debian 11.x ?


thanks


Re: New Amanda Community release 3.5.2 Has Arrived!

2022-08-24 Thread Stefan G. Weichinger

Am 02.08.22 um 20:31 schrieb Nathan Stratton Treadway:

On Tue, Aug 02, 2022 at 12:30:07 -0400, gene heskett wrote:

And where do I get the debian approved versions of 3.5.2?


That's what Jose is attempting to create now... (So watch this thread
for news.)


We are :-)

Tried another build in my test VM today ... still does not work.

And still no packages (for current distributions) from upstream at 
https://www.zmanda.com/downloads/


Re: Multiple storage-definitions, generating archive tapes

2022-08-24 Thread Stefan G. Weichinger

Am 23.08.22 um 23:16 schrieb Exuvo:
I use two separate backup configurations: one for weekly backups and one 
for yearly archival.
The weekly one I start from crontab and the archival one I start manually 
with "sudo -u amanda amdump archive".
Each configuration has its own storage definition with tape names 
starting with R or A ( ^A[0-9]{5}$ ) respectively.
They share a holding disk, but I always flush it, so it is only used for 
slow DLEs to avoid the drive starting and stopping a lot.


Is that what you are trying to do or did i read incorrectly?


This is what I *had* before.

What I try now is having both configs in one single amanda.conf.

So I use two storage definition blocks, two tape pools, etc ... all in 
one amanda.conf.


My goal is to avoid reading/compressing/encrypting the whole data twice 
every week or so.


In the specific case I want to avoid a ~26 hour amdump run filling 4 
tapes every weekend: it has to redo ALL the dumping of ALL the DLEs, 
which sometimes already happened during the week. And even if I decide to 
let it do fresh FULLs on the weekend, I would prefer to have it all in 
ONE single configuration.


(some parts of) the archive tapes should be generated/prepared while 
doing the normal daily backups.


For 2 sites that looks good already: there I use physical and virtual 
tapes as the 2 storages. Amanda is able to write to both storages in 
parallel so I get tapes in both storages filled in every run (and the 
vtapes collect only the FULLs).


At one site it's more complicated because the 2 storages basically only 
point to separate (groups of) slots in one physical tape changer. So I 
can't write to both storages in parallel: only one tape drive.


So I try to come up with a schedule like:

* run the config with "-o storage=daily" every Mon-Fr do incrementals 
and fulls mixed to match the "daily" policy


* let that storage-config keep (some) fulls in the holding disks: they 
can be flushed by the storage-config "archive" on the weekend


* weekends: "-o storage=archive": let the config clean up the holding 
disk, plus let it do any missing fulls in the weekend runs (DLEs which 
are too big etc)


I am sure amanda is capable of doing that, and I get closer to getting 
it right with every run now.


The docs don't tell us much about the possibilities of that newer 
config syntax; to me it seems that all of this was added before the 
change of ownership and before development of the Community Edition stalled.


Back then Jean-Louis Martineau gave some tips on how to use that (in 
some ml-threads) but I never found some real documentation or examples.


OK, the sections and parameters are documented in the man-pages, but 
there is no HOWTO, afaik.


Re: Multiple storage-definitions, generating archive tapes

2022-08-23 Thread Stefan G. Weichinger

Am 10.08.22 um 08:52 schrieb Stefan G. Weichinger:


What I try:

storage1 should leave the lev0 backups in the holding disk after writing 
to tapes in pool1


storage2 should be allowed to remove them when written to the tapes in 
pool2


Does no one else use multiple storage definitions?

I have it in 3 sites now, various configs.

The main one is the one with the "split tape changer": 4 tapes for 
daily, 4 tapes for archive


The current plan:

* storage1 (daily) uses

runtapes 1
dump-selection ALL ALL

and runs monday to friday

-> do incrementals and fulls in mix, only use 1 tape/day

* storage2 (archive) uses

runtapes 4
dump-selection ALL FULL

and runs on saturday (or sunday)

-> only write FULLs to the tapes, use 4 tapes to get all DLEs onto one 
set of tapes


-

I don't get all fulls into the holding disks so I have to use 
"holdingdisk never" for some DLEs (there are Veeam vbk files on one LVM 
volume, and one holding disk is another LVM volume in the same VG -> no 
sense to copy that anyway).


What I try to come up with:

how to trigger fulls on the weekend?

I plan to use "amadmin archive force *" before starting "amdump archive 
-o storage=archive" on weekends.


OK?

Some fulls could/should be collected in the holdingdisk by running the 
daily backups. This could be achieved by using the right *-threshold 
values, I assume.


All this gets quite complicated quickly, at least for me ;-)

Maybe I overlook something, maybe I don't yet fully understand some 
parts here.


Would be great to discuss this with others, thanks.




Re: Multiple storage-definitions, generating archive tapes

2022-08-10 Thread Stefan G. Weichinger



(resend with correct ML-email)

Am 09.08.22 um 16:50 schrieb Jose M Calhariz:

On Mon, Aug 08, 2022 at 04:34:11PM +0200, Stefan G. Weichinger wrote:

Am 04.08.22 um 10:08 schrieb Stefan G. Weichinger:
(...)

combined with

flush-threshold-dumped  200 # (or more)
flush-threshold-scheduled   200 # (or more)
taperflush  200


My experience is that these options do not work as documented.  In my case
there is an autoflush before the holding disk has enough files to fill
a tape, on a setup that produces fewer files per amdump than the size
of a physical tape.


I never fully understood and therefore trusted these options ;-)

What I try:

storage1 should leave the lev0 backups in the holding disk after writing 
to tapes in pool1


storage2 should be allowed to remove them when written to the tapes in pool2

Maybe it's too complicated to define 2 policies for the 2 storages; 
that's what I am trying to find out currently.


I flush stuff to storage2 right now and it seems to work as intended.

Can't confirm your observation though, would have to test that in detail.

thanks!


Re: Multiple storage-definitions, generating archive tapes

2022-08-08 Thread Stefan G. Weichinger

Am 04.08.22 um 10:08 schrieb Stefan G. Weichinger:


1) I would have to "split" a physical tape change into 2 logical changers:

12 tapes for "daily" runs (= storage 1)
12 tapes for "full-only" runs (= storage 2)

(How) would amanda handle that? As there is only one tape drive it would 
have to keep everything in the holding disk and write it twice, 
sequentially, to each tape(set).


I am working on 2 sites using multiple storages. One runs OK already 
using 2 changers in parallel.


The second one only has one tape library.

I define one changer using slots 1-4 (storage1) and the 2nd changer with 
slots 5-8 (storage2).


Both changers use the same and only tape drive ... that's why I need 
sequential processing.
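
Roughly like this (changer and drive paths are placeholders here):

define changer robot_daily {
    tpchanger "chg-robot:/dev/sg1"
    property "tape-device" "0=tape:/dev/nst0"
    property "use-slots" "1-4"
}

define changer robot_archive {
    tpchanger "chg-robot:/dev/sg1"
    property "tape-device" "0=tape:/dev/nst0"
    property "use-slots" "5-8"
}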


I am unsure how to use that correctly.

My approach so far (not yet fully tested):

# daily cronjob

/usr/sbin/amdump config -ostorage=storage1

combined with

flush-threshold-dumped  200 # (or more)
flush-threshold-scheduled   200 # (or more)
taperflush  200
autoflush yes

this should keep the holding disk files.

Now if I want to write these to storage2: do I run amdump or amflush?

(assuming I use "-o storage=storage2")

That second storage should be allowed to clear the written DLEs from the 
holdingdisk.




Multiple storage-definitions, generating archive tapes

2022-08-04 Thread Stefan G. Weichinger



Now that I understand a bit better how to use 2 storage-definitions I'd 
like to improve a setup at a customer.


Back then I tried to use amvault, but didn't get the concepts right 
(reading the old threads I see that I didn't even get the 
command-options right ...).


What is set up right now:

2 configs "daily" and "weekly", sharing the disklist

daily does incrementals and fulls, as usual, excludes some (huge) DLEs 
we don't want to be dumped on weekdays


weekly does FULLs only, taking >20 hrs of tape time to write 4 LTO6 
tapes each weekend


So the 2 configs each read the full data into separate holding disks etc 
-> redundancy, stress for hardware, etc


The "weekly" tapes get marked as "no-reuse" every few months and each 
year and stored elsewhere. That is important ... long term archive.


-

Now I think of integrating the two configs by using 2 
storage-definitions in one amanda.conf


by doing so it should be possible to collect the FULLs during the week 
somehow ... avoiding the huge run on the weekend etc


Does anyone do it like that?
Any hints or tips?

Things to think of:

1) I would have to "split" a physical tape change into 2 logical changers:

12 tapes for "daily" runs (= storage 1)
12 tapes for "full-only" runs (= storage 2)

(How) would amanda handle that? As there is only one tape drive it would 
have to keep everything in the holding disk and write it twice, 
sequentially, to each tape(set).


(I currently have ~3 TB holding disk, and around 10TB = 4 tapes data 
volume to handle)


2) I'd like to use a minimum of tapes for "weekly": amanda should 
collect FULLs along the way (in the holdingdisk) until a weekly tape 
could be filled completely


Maybe I would have to adjust my amdump-cronjobs: only use storage1 in 
normal runs, use storage2 for another time slot.


I think you all get the picture: I see the possibilities but don't get 
the combination right yet ;-)


Looking forward to any feedback, Stefan


Re: New Amanda Community release 3.5.2 Has Arrived!

2022-08-02 Thread Stefan G. Weichinger

Am 02.08.22 um 10:06 schrieb Stefan G. Weichinger:

Am 02.08.22 um 09:44 schrieb gene heskett:

On 8/2/22 01:40, Winston Sorfleet wrote:

Much appreciated José, count me in as a beta tester.



Me too, but it's been a while and this is a fairly fresh bullseye 
install, and autogen complains mightily, and quickly.

gene@coyote:~/src/amanda-tag-community-3.5.2$ ./autogen
See DEVELOPING for instructions on updating:
  * gettext macros
  * gnulib
  * libtool files
..creating file lists
..aclocal
config/amanda/libs.m4:160: warning: macro 'AM_PATH_GLIB_2_0' not found 
in library

...aclocal patches
..autoconf
configure:46532: error: possibly undefined macro: AM_PATH_GLIB_2_0
   If this token and others are legitimate, please use 
m4_pattern_allow.

   See the Autoconf documentation.
configure:46533: error: possibly undefined macro: AC_MSG_ERROR
autoconf failed

And the fix is?

build-essential, debhelper and friends are installed.


As far as I learned from Chris Hassell, the new packaging scripts should 
build packages for Debian with:


./packaging/deb/buildpkg

I am trying that for the branch "tag-community-3.5.2" right now in a 
Debian-11.4-VM.


There is quite a list of needed packages to install, Chris told me:

gcc g++ binutils gettext libtool autoconf automake bison flex swig 
libglib2.0-dev libjson-glib-1.0 libjson-glib-dev libssl-dev 
libcurl4-openssl-dev libncurses5-dev libreadline-dev perl-base 
perl-modules libcpanplus-perl libio-socket-ssl-perl libswitch-perl 
bsd-mailx mtx procps smbclient dump gnuplot-nox xinetd


and some extra:

xsltproc build-essential debhelper fakeroot dpkg-dev dh-make-perl git 
make gawk grep tar passwd


pressing SEND now, my build is still running


I was able to build packages for 3.5.2 after applying the fix mentioned in

https://github.com/zmanda/amanda/issues/185

Installing the packages over the packages coming from the Debian 
repositories seems a bit tricky:


the new packages use the user "amandabackup", so far it was "backup"

the path to amandad seems to change as well

I assume I can edit packaging/deb/rules to adjust that.

Unsure if that is a good thing to do.


Re: New Amanda Community release 3.5.2 Has Arrived!

2022-08-02 Thread Stefan G. Weichinger

Am 02.08.22 um 10:06 schrieb Stefan G. Weichinger:


pressing SEND now, my build is still running


hitting https://github.com/zmanda/amanda/issues/184


Re: New Amanda Community release 3.5.2 Has Arrived!

2022-08-02 Thread Stefan G. Weichinger

Am 02.08.22 um 09:44 schrieb gene heskett:

On 8/2/22 01:40, Winston Sorfleet wrote:

Much appreciated José, count me in as a beta tester.



Me too, but it's been a while and this is a fairly fresh bullseye 
install, and autogen complains mightily, and quickly.

gene@coyote:~/src/amanda-tag-community-3.5.2$ ./autogen
See DEVELOPING for instructions on updating:
  * gettext macros
  * gnulib
  * libtool files
..creating file lists
..aclocal
config/amanda/libs.m4:160: warning: macro 'AM_PATH_GLIB_2_0' not found 
in library

...aclocal patches
..autoconf
configure:46532: error: possibly undefined macro: AM_PATH_GLIB_2_0
   If this token and others are legitimate, please use 
m4_pattern_allow.

   See the Autoconf documentation.
configure:46533: error: possibly undefined macro: AC_MSG_ERROR
autoconf failed

And the fix is?

build-essential, debhelper and friends are installed.


As far as I learned from Chris Hassell, the new packaging scripts should 
build packages for Debian with:


./packaging/deb/buildpkg

I am trying that for the branch "tag-community-3.5.2" right now in a 
Debian-11.4-VM.


There is quite a list of needed packages to install, Chris told me:

gcc g++ binutils gettext libtool autoconf automake bison flex swig 
libglib2.0-dev libjson-glib-1.0 libjson-glib-dev libssl-dev 
libcurl4-openssl-dev libncurses5-dev libreadline-dev perl-base 
perl-modules libcpanplus-perl libio-socket-ssl-perl libswitch-perl 
bsd-mailx mtx procps smbclient dump gnuplot-nox xinetd


and some extra:

xsltproc build-essential debhelper fakeroot dpkg-dev dh-make-perl git 
make gawk grep tar passwd


pressing SEND now, my build is still running



Re: New Amanda Community release 3.5.2 Has Arrived!

2022-08-02 Thread Stefan G. Weichinger

Am 29.07.22 um 04:11 schrieb Pavan Raj:

Hello Everyone,

We at Zmanda are pleased to announce the 3.5.2 release of Amanda Community.

URL: http://www.amanda.org/download.php 



https://www.zmanda.com/downloads/

lists packages for Debian-9.9 (for example)

That isn't very helpful when Debian is at 11.4 currently.

Please provide usable packages for current distributions; not everyone 
wants to build software from source code.




Re: Is there a write-up on using amvault with an archive config?

2022-08-02 Thread Stefan G. Weichinger

Am 02.08.22 um 08:13 schrieb Stefan G. Weichinger:


So that sounds like two "storage" definitions.

Will try.


Yep, got it.
That works great so far :-)

Now for some tuning, for example only writing FULLs to one storage etc

That might lead to some reconfiguring in some of my setups ...



Re: Is there a write-up on using amvault with an archive config?

2022-08-02 Thread Stefan G. Weichinger

Am 22.03.22 um 05:34 schrieb Winston Sorfleet:
I use amvault to tertiary media (LTO-2, I am just a casual home user) 
while my main archive is VTL on a slow NAS.  I use cron, but obviously I 
could just run it as needed from the command line.


You're right, it is a bit hard to intuit, and I had to get some help 
from the community here as it uses overrides.

The command line I use is as follows:

/usr/sbin/amvault -q --latest-fulls --dest-storage "tape_storage" vtl


Thanks, Winston,

I got vaulting up and running now, in my case with a physical and a 
virtual changer. Reading from the physical tapes and writing to the 
virtual tapes now.


But more questions arise ;-)

Maybe amvault isn't exactly what I need: I would prefer to write to both 
storages from the same holding disk files. That would avoid stressing 
the physical tapes so much.


Would that work with amvault or would it need a second "storage" 
definition instead of a "vault-storage" definition?


Would it work with clever thresholds? I assume not: the threshold values 
don't get changed after writing to the first changer; files would just 
stay in the holding disk (until removed next time?).


Found this old thread:

https://www.backupcentral.com/forum/14/290509/approaches_to_amanda_vaulting_

quote:

"Amanda 3.5 can do everything you want only by running the amdump command.

Using an holding disk:

* You configure two storages
* All dumps go to the holding disk
* All dumps are copied to each storages, not necessarily at the same
time or in the same run.
* The dumps stay in holding until they are copied to both storages"

So that sounds like two "storage" definitions.

Will try.

Hints welcome, as always.



Re: Is there a write-up on using amvault with an archive config?

2022-08-01 Thread Stefan G. Weichinger

Am 23.03.22 um 07:19 schrieb Jon LaBadie:

Thanks Winston, this moves me a bit closer.


Jon, are you successful with amvault now?

I have to try to set that up again; I want to "amvault" an existing 
setup with an LTO-4 changer to additional virtual tapes.


Back then I didn't get it right, now I want to attack that one again.


Re: New Amanda Community release 3.5.2 Has Arrived!

2022-08-01 Thread Stefan G. Weichinger

Am 29.07.22 um 04:11 schrieb Pavan Raj:

Hello Everyone,

We at Zmanda are pleased to announce the 3.5.2 release of Amanda
Community.


Nice to hear ...

anyone touched that yet?

Looking forward to that release packaged for Debian (most of my 
amanda-servers run Debian Linux).


On gentoo I am afraid I will have to come up with an ebuild myself for 
now (even the 3.6 beta doesn't come with an ebuild yet).






Re: using Google Cloud for virtual tapes

2022-07-29 Thread Stefan G. Weichinger

Am 27.06.22 um 10:21 schrieb Stefan G. Weichinger:

Am 03.06.22 um 09:13 schrieb Stefan G. Weichinger:

I now at last received credentials to that gcs storage bucket, so I 
can start to try ...


Does it make sense to somehow follow 
https://wiki.zmanda.com/index.php/How_To:Backup_to_Amazon_S3 ?


I don't find anything mentioning Google Cloud in the Wiki.


*bump*


Considering setting up vtapes and rcloning them to GCP ...

Does anyone do something similar?

This would need some housekeeping: removing the rcloned vtapes after X 
runs etc.
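
Roughly (remote and bucket names made up here):

# after each amdump run, mirror the vtape directory to the bucket
rclone sync /amanda/vtapes gcs-remote:my-amanda-bucket/vtapes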




Re: using Google Cloud for virtual tapes

2022-06-27 Thread Stefan G. Weichinger

Am 03.06.22 um 09:13 schrieb Stefan G. Weichinger:

I now at last received credentials to that gcs storage bucket, so I can 
start to try ...


Does it make sense to somehow follow 
https://wiki.zmanda.com/index.php/How_To:Backup_to_Amazon_S3 ?


I don't find anything mentioning Google Cloud in the Wiki.


*bump*




Re: using Google Cloud for virtual tapes

2022-06-03 Thread Stefan G. Weichinger



Am 05.04.22 um 11:55 schrieb Stefan G. Weichinger:

Am 04.04.22 um 23:33 schrieb Chris Hassell:
Google cloud works, and works well enough indeed.   All of them work... 
but a non-block technique is capped at 5TB by almost all providers, 
and getting there is complicated and can DOUBLE your storage cost 
over one month [only] if you cannot make temporaries without being 
charged.  (looking at you, Wasabi!!).


However either A or B or C for Google specifically...
A) the desired block size must be kept automatically small (it varies 
but ~40MB or smaller buffer or so for a 4GB system) ... and each DLE 
"tape" must be limited in size
B) the biggest block size can be used to store 5TB objects [max == 
512MB] but the curl buffers will take ~2.5GB and must be hardcoded 
[currently] in the build.  It's too much for many systems.
C) the biggest block size can be used but google cannot FAIL EVEN ONCE 
... or the cloud upload cannot be restarted and the DLE fails 
basically.   This doesn't succeed often on the way to 5TB.
D) the biggest block size can be used but Multi-Part must be turned 
off and a second DLE and later gets to be very very very slow to add on


Option D has NO limits to backups, but it is what needs the O(log N) 
check for single-stored blocks.


This currently does a O(N) check against earlier blocks to check the 
cloud storage total after file #1 every time.   Verrry slow at only 
1000 per xaction.


@Chris, thanks for these infos.

Sounds a bit complicated, I have to see how I can start there.

I won't have very large DLEs anyway, that might help.

I have ~700GB per tape right now, that's not very much. Although 
bandwidth (=backup time window) also will be an issue here.


I now at last received credentials to that gcs storage bucket, so I can 
start to try ...


Does it make sense to somehow follow 
https://wiki.zmanda.com/index.php/How_To:Backup_to_Amazon_S3 ?


I don't find anything mentioning Google Cloud in the Wiki.


Re: "overdue" wrong

2022-06-03 Thread Stefan G. Weichinger



Am 02.06.22 um 14:02 schrieb Nathan Stratton Treadway:


The amadmin info shows that this dump has no size (in addition to the
"zero" date), so somehow the amanda history is not recording a
successful dump  Can you make sure sure that the dump was in fact
written successfully all the way to tape?


I think this is caused by the "[missing size line from sendbackup]" 
issue with amsamba; these DLEs all use amsamba.


I will try once more to use the amsamba script from upcoming 
amanda-3.6.0 (although last time I did it didn't fix that ...).


"overdue" wrong

2022-06-02 Thread Stefan G. Weichinger




I see wrong "overdue" information in

$ amadmin vtape due

[..]
Overdue 19140 days: server:dle007
[..]


$ amadmin vtape info

Current info for server dle007:


  Stats: dump rates (kps), Full:  64128.6, 62520.7, 64225.8
                    Incremental:     10.0,     1.1,     2.0
        compressed size,   Full:  -100.0%, -100.0%, -100.0%
                    Incremental:  -100.0%, -100.0%, -100.0%

  Dumps: lev datestmp  tape         file  origK  compK  secs
          0  19700101  vtape-007-1    14     -1     -1    -1


The DLEs are dumped OK, though. How to fix that? Remove some index file 
or so?


As you may notice from my various postings lately, I am trying to clean 
up my amanda configs and get rid of warnings/errors and other 
uncertainties ...


Re: slow USB vtapes

2022-06-01 Thread Stefan G. Weichinger

Am 01.06.22 um 12:04 schrieb Jose M Calhariz:


How many DLEs and how many run in parallel (inparallel parameter)?


inparallel 4

basically 3 linux hosts involved, 20 DLEs, some of them via amsamba

I even have to encrypt things ...

Maybe I should decrease inparallel, yes

And only flush after the DLEs are on disk.


How fast is your holdingdisk?


rather slow. 4x 5400 rpm drives in a HW RAID1 -> /dev/sda


What "iostat -k -x 2" command gives when amanda is running?


*thanks* for that pointer

It tells that /dev/sda is 100% utilized.

So that seems to be the bottleneck.

Especially as there is also an rsnapshot cronjob on that box. That is 
quite heavy for that CPU and the storage.


Right now, with rsnapshot disabled, I was able to flush a 200GB DLE 
successfully .. and in acceptable time:

43 minutes

82753.8 KB/s

Better than multiple hours each night. Much better.

-

changes today:

* changed blocksize from 4m to 1m .. maybe I can go back here

* downgraded to 3.5.1 (just to minimize the moving parts)

* made sure the holding disk is mounted with "noatime"

changes planned:

* bigger and faster disk(s) for the holding disk

* split off rsnapshot-ting: either move that to another server, or at 
least make sure it never runs in parallel to amdumps etc


I test an rsnapshot run now and also watch iostat ... same bottleneck: 
/dev/sda


slow USB vtapes

2022-05-31 Thread Stefan G. Weichinger




I see low write performance to (modern) USB disks attached via USB 3.0

This leads to very long backup windows and even DLEs somehow not written 
because things get stuck (?) somehow.


Until yesterday the disks were attached to USB 2.0 ports, I expected to 
solve this by adding a PCIe card with USB 3.0 ports.


Maybe the mount options aren't optimal:

UUID=3de7b456-019e-40ed-a1b9-aad41d371f26 /mnt/externaldisk7 ext4 
relatime,noauto,user 0 2


?

When I test things by doing something like:

sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync

I get values of maybe 130 MB/s or so, which is OK imo.

The old internal RAID delivers ~150MB/s (reading from the holding disk).

But with amanda (3.6.0pre2, btw) the flushes and dumps are slow.

Does changing some blocksize help with vtapes?
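
(I mean the blocksize in the tapetype; currently I have something like this, length made up here:)

define tapetype USB_VTAPE {
    comment "vtapes on external USB disk"
    length 900 gbytes
    blocksize 4 mbytes
}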

Sure, old hardware, I know.


Re: amrecover usage with chg-robot

2022-05-30 Thread Stefan G. Weichinger

Am 27.05.22 um 17:30 schrieb Nathan Stratton Treadway:


Ah, so exuvo_crypt is run by the amidxtaped process rather than by the
amrecover process itself.

What does strace show amrecover is doing during this period?

And "ps -ef" shows that the openssl process is still alive (i.e. not
defunct).  What does "strace" show on that process.  If you manually
kill it, does the change of processes up through amidxtaped unwind and
amrecover resume normal processing?



Took me a while to get to do the tests.

Here an strace of amrecover at the end of the first tape:


poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN}, {fd=11, 
events=POLLIN}, {fd=13, events=POLLIN}, {fd=15, events=POLLIN}], 5, -1) 
= 2 ([{fd=4, revents=POLLIN}, {fd=5, revents=POLLIN}])

read(4, "\2\0\0\0\0\0\0\0", 16) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
read(5, "\0\0@\0\0\7\241\36", 8)= 8
read(5, 
"\204\34\313\360z\247!\324\306h\272\364\370_c\353\336$\262\224\231 
:\1\247\273\330\316\25\235c\2"..., 16384) = 16384
write(8, 
"\204\34\313\360z\247!\324\306h\272\364\370_c\353\336$\262\224\231 
:\1\247\273\330\316\25\235c\2"..., 16384) = 4096

write(4, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
poll([{fd=4, events=POLLIN}, {fd=8, events=POLLOUT}, {fd=11, 
events=POLLIN}, {fd=13, events=POLLIN}, {fd=15, events=POLLIN}], 5, -1) 
= 1 ([{fd=4, revents=POLLIN}])

read(4, "\4\0\0\0\0\0\0\0", 16) = 8
poll([{fd=4, events=POLLIN}, {fd=8, events=POLLOUT}, {fd=11, 
events=POLLIN}, {fd=13, events=POLLIN}, {fd=15, events=POLLIN}], 5, -1) 
= 1 ([{fd=15, revents=POLLIN}])

write(4, "\1\0\0\0\0\0\0\0", 8) = 8
read(15, "\ngzip: stdin: decompression OK, "..., 2046) = 57
write(2, "\r", 1)   = 1
write(2, "/bin/gzip: \n", 12)   = 12
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=151148, 
si_uid=0, si_status=2, si_utime=21750, si_stime=1658} ---

write(2, "/bin/gzip: gzip: stdin: decompre"..., 67) = 67
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
poll([{fd=4, events=POLLIN}, {fd=8, events=POLLOUT}, {fd=11, 
events=POLLIN}, {fd=13, events=POLLIN}, {fd=15, events=POLLIN}], 5, -1) 
= 2 ([{fd=4, revents=POLLIN}, {fd=15, revents=POLLHUP}])

read(4, "\2\0\0\0\0\0\0\0", 16) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
read(15, "", 1989)  = 0
close(15)   = 0
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
poll([{fd=4, events=POLLIN}, {fd=8, events=POLLOUT}, {fd=11, 
events=POLLIN}, {fd=13, events=POLLIN}], 4, -1) = 1 ([{fd=4, 
revents=POLLIN}])

read(4, "\3\0\0\0\0\0\0\0", 16) = 8
poll([{fd=4, events=POLLIN}, {fd=8, events=POLLOUT}, {fd=11, 
events=POLLIN}, {fd=13, events=POLLIN}], 4, -1) = 2 ([{fd=11, 
revents=POLLHUP}, {fd=13, revents=POLLHUP}])

write(4, "\1\0\0\0\0\0\0\0", 8) = 8
read(11, "", 1830)  = 0
close(11)   = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=151150, 
si_uid=0, si_status=0, si_utime=351, si_stime=884} ---

write(4, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
read(13, "", 2046)  = 0
close(13)   = 0
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
poll([{fd=4, events=POLLIN}, {fd=8, events=POLLOUT}], 2, -1) = 1 
([{fd=4, revents=POLLIN}])




The PID 151150 was tar:

 151130 pts/2  Sl+  0:02   0   94 119297  8976  0.0  amrecover abt -o auth=local -s localhost
 151133 pts/2  S+   0:00   0   29  19394  6552  0.0  /usr/libexec/amanda/amandad -auth=local
 151134 pts/2  S+   0:00   0   36  20747  7864  0.0  /usr/libexec/amanda/amindexd amandad local
 151135 pts/2  Z+   0:00   0    0      0     0  0.0  [amandad] <defunct>
 151138 pts/2  R+   0:00   0   29  19554  6504  0.0  /usr/libexec/amanda/amandad -auth=local
 151139 pts/2  Sl+  0:01   0    3 233592 42820  0.2  /usr/bin/perl /usr/libexec/amanda/amidxtaped amandad l
 151140 pts/2  Z+   0:00   0    0      0     0  0.0  [amandad] <defunct>
 151144 pts/4  S+   0:00   0   62   9685   940  0.0  tail -f amidxtaped.20220530124548.debug
 151146 pts/2  S+   0:00   0  812   5495  2984  0.0  /bin/bash /usr/sbin/exuvo_crypt -d
 151148 pts/2  S+   0:08   0   69   5382  1464  0.0  /bin/gzip -dc
 151150 pts/2  S+   0:00   0  385  10046  2572  0.0  tar --ignore-zeros --numeric-owner -xpGvf - ./etc
 151153 pts/2  R+   0:06   0  514   8301  4500  0.0  /usr/bin/openssl enc -pbkdf2 -d -aes-256-ctr -salt -pas



-


strace openssl shows only:

# strace -p 151133
strace: Process 151133 attached
restart_syscall(<... resuming interrupted read ...>

Killing that logs in amidxtaped.xx.debug:


Mon May 30 12:55:26.730917575 2022: pid 151139: thd-0x563663659c00: 
amidxtaped: 

Re: amrecover usage with chg-robot

2022-05-27 Thread Stefan G. Weichinger




forgot pstree:

tmux: server-+-bash---amrecover-+-amandad-+-amandad
             |                  |         `-amindexd
             |                  |-amandad-+-amandad
             |                  |         `-amidxtaped-+-exuvo_crypt---openssl
             |                  |                      `-2*[{amidxtaped}]
             |                  |-gzip
             |                  |-tar
             |                  `-{amrecover}

Re: amrecover usage with chg-robot

2022-05-27 Thread Stefan G. Weichinger

On 27.05.22 at 14:28, Stefan G. Weichinger wrote:


On 27.05.22 at 14:05, Nathan Stratton Treadway wrote:

On Fri, May 27, 2022 at 10:28:09 +0200, Stefan G. Weichinger wrote:

After that both tar and gzip.binary are shown as <defunct> in ps,
whatever that means.


Okay, that's a little progress in the investigation.

"" means that the process has exited, but the return code from
the process has not been read by the parent process yet.  So in this
case, whatever process spawned the tar and gzip subprocesses is not
"noticing" when the subprocesses finish... the question is why (and what
is it stuck doing instead of cleaning up)?

Are the "openssl enc" and/or encription-wrapper-script processes still
out there at this time (and what state are they in)?

You should be able to use pstree or "ps -ef" to determine which process
is the parent (PPID column) of the defunct subprocesses.


Yes, that parent process was still visible.

I will retry that later, currently I am updating stuff on that server etc


The parent is the amrecover process:


root     2154598 2096066 15 15:25 pts/2    00:01:26 amrecover abt -o auth=local -s localhost
amanda   2154601 2154598  0 15:25 pts/2    00:00:00 /usr/libexec/amanda/amandad -auth=local
amanda   2154604 2154601  0 15:25 pts/2    00:00:00 /usr/libexec/amanda/amindexd amandad local
amanda   2154605 2154601  0 15:25 pts/2    00:00:00 [amandad] <defunct>
amanda   2154632 2154598  6 15:25 pts/2    00:00:35 /usr/libexec/amanda/amandad -auth=local
amanda   2154633 2154632  6 15:26 pts/2    00:00:32 /usr/bin/perl /usr/libexec/amanda/amidxtaped amandad local
amanda   2154634 2154632  0 15:26 pts/2    00:00:00 [amandad] <defunct>
amanda   2154701 2154633  0 15:26 pts/2    00:00:00 /bin/bash /usr/sbin/exuvo_crypt -d
root     2154703 2154598 49 15:26 pts/2    00:04:11 [gzip] <defunct>
root     2154705 2154598  2 15:26 pts/2    00:00:13 [tar] <defunct>
amanda   2154708 2154701 41 15:26 pts/2    00:03:31 /usr/bin/openssl enc -pbkdf2 -d -aes-256-ctr -salt -pass fd:3



This is while this is logged:

# tail -f amidxtaped.20220527152600.debug

Fri May 27 15:33:38.443013672 2022: pid 2154633: thd-0x556460b31400: 
amidxtaped: 
/usr/lib64/perl5/vendor_perl/5.34/Amanda/Restore.pm:1719:info:490 
12472320 kb


Fri May 27 15:33:53.454973643 2022: pid 2154633: thd-0x556460b31400: 
amidxtaped: 
/usr/lib64/perl5/vendor_perl/5.34/Amanda/Restore.pm:1719:info:490 
12472320 kb


Fri May 27 15:34:08.466959610 2022: pid 2154633: thd-0x556460b31400: 
amidxtaped: 
/usr/lib64/perl5/vendor_perl/5.34/Amanda/Restore.pm:1719:info:490 
12472320 kb


PID 2154633 -> /usr/bin/perl /usr/libexec/amanda/amidxtaped amandad local


Re: amrecover usage with chg-robot

2022-05-27 Thread Stefan G. Weichinger



On 27.05.22 at 14:05, Nathan Stratton Treadway wrote:

On Fri, May 27, 2022 at 10:28:09 +0200, Stefan G. Weichinger wrote:

After that both tar and gzip.binary are shown as <defunct> in ps,
whatever that means.


Okay, that's a little progress in the investigation.

"" means that the process has exited, but the return code from
the process has not been read by the parent process yet.  So in this
case, whatever process spawned the tar and gzip subprocesses is not
"noticing" when the subprocesses finish... the question is why (and what
is it stuck doing instead of cleaning up)?

Are the "openssl enc" and/or encription-wrapper-script processes still
out there at this time (and what state are they in)?

You should be able to use pstree or "ps -ef" to determine which process
is the parent (PPID column) of the defunct subprocesses.


Yes, that parent process was still visible.

I will retry that later, currently I am updating stuff on that server etc

(to add complexity: this is a rather old server with Gentoo Linux on it, 
a bit neglected because it should have been replaced long ago)


Re: amrecover usage with chg-robot

2022-05-27 Thread Stefan G. Weichinger



On 26.05.22 at 12:05, Stefan G. Weichinger wrote:

On 26.05.22 at 12:01, Stefan G. Weichinger wrote:

On 26.05.22 at 04:10, Exuvo wrote:

I think I had "/bin/gzip: gzip: stdin: decompression OK, trailing 
garbage ignored" before I passed -q to zstd, which suppresses warnings 
and interactivity.


Maybe I could change it to "gzip -d" somewhere in the code to also 
suppress this?


sorry, correction:

"gzip -d -q"


Trying this by:

* moved gzip to gzip.binary
* copied gunzip to gzip
* edited gzip to call "gzip.binary -q"

and rerunning my amrecover test with 2 tapes and device "robot"
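
(For the record, the wrapper is just a two-liner, sketched here from the 
steps above:

#!/bin/sh
# /bin/gzip is now this wrapper; the real binary was moved aside
exec /bin/gzip.binary -q "$@"

saved as /bin/gzip and made executable.)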

First tape is extracted OK

That ugly "trailing garbage" message is gone now, as intended.

The strace on tar shows that it exits with exit code 0.

After that both tar and gzip.binary are shown as <defunct> in ps, 
whatever that means.


And amidxtaped.xx.debug shows the same

Fri May 27 10:26:40.042894830 2022: pid 737966: thd-0x563359eb2800: 
amidxtaped: 
/usr/lib64/perl5/vendor_perl/5.32/Amanda/Restore.pm:1719:info:490 
12472320 kb


as before. Maybe I have to wait ...


Re: amrecover usage with chg-robot

2022-05-26 Thread Stefan G. Weichinger

On 26.05.22 at 12:01, Stefan G. Weichinger wrote:

On 26.05.22 at 04:10, Exuvo wrote:

I think I had "/bin/gzip: gzip: stdin: decompression OK, trailing 
garbage ignored" before I passed -q to zstd, which suppresses warnings 
and interactivity.


Maybe I could change it to "gzip -d" somewhere in the code to also 
suppress this?


sorry, correction:

"gzip -d -q"


Re: amrecover usage with chg-robot

2022-05-26 Thread Stefan G. Weichinger

On 26.05.22 at 04:10, Exuvo wrote:

I think I had "/bin/gzip: gzip: stdin: decompression OK, trailing 
garbage ignored" before I passed -q to zstd, which suppresses warnings and 
interactivity.


Maybe I could change it to "gzip -d" somewhere in the code to also suppress this?




Re: amrecover usage with chg-robot

2022-05-26 Thread Stefan G. Weichinger

On 26.05.22 at 04:10, Exuvo wrote:

From my notes when I was doing recovery testing:

For some reason if I use encryption I get a single spurious error at 
the end of amcheckdump or amrecover if you recover the last file written:

   application stderr: /usr/bin/tar: Skipping to next header
   application stderr: /usr/bin/tar: Exiting with failure status due to 
previous errors

It still recovers as it should, so nothing to worry about, and a manual 
recovery from the same tape does not produce this warning, but I have not 
yet figured out why it happens.

I did do a complete 8TB restore when I first set up my archive backups, 
which takes 7 tapes for me; the changer worked, and that did give the above 
warning at the end, but all files matched the source when I checked.

I think I had "/bin/gzip: gzip: stdin: decompression OK, trailing 
garbage ignored" before I passed -q to zstd, which suppresses warnings and 
interactivity.

You could try a manual restore: position at block 1 with "mt fsb", then:

mbuffer -i $TAPE -s 1024k -b 2048 -L | /etc/amanda/encrypt -d | /etc/amanda/zstd-compression -d | tar -xf -

or:

mbuffer -i $TAPE -s 1024k -b 2048 -L | openssl enc -pbkdf2 -d -aes-256-ctr -salt -pass fd:3 3< pwfile | zstd -dqcf | tar -xf -

That never gave the warnings I get with amrecover, but it is only feasible 
for full restores.
You can try "zstd -dfo /dev/null" to see if zstd gives any warnings while 
reading.


Thanks a lot for that detailed reply. Although in my case the 
compression is gzip, not zstd, so I am unsure whether your 
suggestion applies.


A short search tells me that zstd is able to decompress gzip archives?

Nice, but the existing backups on tape tell amrecover to use "gzip -d" 
or so, right?


I see people setting up a symlink .. hmm, wondering.

In my "define changer" section i have device-property "READ_BUFFER_SIZE" 
"1024k", which i had to add to read older backups when i was testing 
different block sizes.

That settings means it supports up to that blocksize for restores.


I don't have that setting yet.

Set it to 2048k now to match

blocksize 2 mbytes

readblocksize 2 mbytes

in the used tapetype (maybe all 3 settings could go into either the 
tapetype OR the changer device-property lines).
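
Put together, a sketch of what the changer definition carries now, with 
the values discussed above:

define changer robot {
    tpchanger "chg-robot:/dev/sg21"
    property "tape-device" "0=tape:/dev/nst0"
    device-property "READ_BUFFER_SIZE" "2048k"
}

with blocksize/readblocksize staying in the tapetype for now.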


Re: amrecover usage with chg-robot

2022-05-26 Thread Stefan G. Weichinger

On 26.05.22 at 03:48, Exuvo wrote:
To handle changing device names use e.g. 
/dev/tape/by-id/scsi-HUE12340E22-nst and /dev/tape/by-id/scsi-DEC1234567


Nice hack, thanks. Applied.
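
"Applied" meaning roughly this in the changer definition (a sketch; the 
by-id names are Exuvo's examples, the real ones come from 
"ls -l /dev/tape/by-id/"):

define changer robot {
    tpchanger "chg-robot:/dev/tape/by-id/scsi-DEC1234567"
    property "tape-device" "0=tape:/dev/tape/by-id/scsi-HUE12340E22-nst"
}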



Re: amrecover usage with chg-robot

2022-05-25 Thread Stefan G. Weichinger



On 25.05.22 at 16:46, Stefan G. Weichinger wrote:

The tested DLE is encrypted and compressed. Yes, redundant ... I can 
change that in the future.


The first part gets read perfectly, but it seems that the end of the 
tarball crashes things somehow.


I remember that error from mailing list threads .. I will research later 
when I am back at my office.


Testing with a DLE that is/was not encrypted. Plain tar, no compression.

amrecover with device "/dev/sg21" works great. I have to confirm the 
tape change after the first files have been extracted, then the changer 
loads the next tape and the next restore step happens. Great.


-

So to me it looks that my dumptype with both compression and encryption 
is the problem.


I use the script provided by Anton "exuvo" Olsson, he shared it in 
earlier threads here.


The current iteration on this server:

https://dpaste.org/2YrkJ

Maybe it hasn't yet been tested with amrecover from multiple tapes?

Or the combination with gzip is a problem.

I'll be happy to share any more debug logs or so.


Re: amrecover usage with chg-robot

2022-05-25 Thread Stefan G. Weichinger

On 25.05.22 at 16:12, Nathan Stratton Treadway wrote:


When doing an extract tar does read on to the end of the tar file before
exiting, but "hours and hours" seems like a long time to wait for
that...

Is tar still running (e.g. what does "top" or "ps" show)?  If so, what
does strace on the tar process show?  Do any other amanda (sub)processes
exist on the system at this time?


I reran that now and have to add some more information:

That size value increases up to the mentioned 12472320 kb ... and until 
then tar is running.


Then the tar file seems to end, and amrecover says:

/bin/gzip: 
/bin/gzip: 
gzip: stdin: decompression OK, trailing garbage ignored


tar has stopped by then, and there is no more gzip process visible.

The tested DLE is encrypted and compressed. Yes, redundant ... I can 
change that in the future.


The first part gets read perfectly, but it seems that the end of the 
tarball crashes things somehow.


I remember that error from mailing list threads .. I will research later 
when I am back at my office.


Re: amrecover usage with chg-robot

2022-05-25 Thread Stefan G. Weichinger

On 25.05.22 at 15:35, Nuno Dias wrote:

That message seems OK, but usually the "kb" will increase with
time; the same "kb" every time seems a little strange.
Unfortunately I can't help more than this! I think you need
someone who understands more of Amanda.


Never mind, thanks for providing your config etc

I also expect that value to increase ... maybe someone comes up with 
some more details on all this.


For now I stop my amrecover-test-run. It should have succeeded hours ago.


Re: amrecover usage with chg-robot

2022-05-25 Thread Stefan G. Weichinger



On 25.05.22 at 14:41, Nuno Dias wrote:

My conf  (the relevant parts)

tpchanger "robot"

define changer robot {
 tpchanger "chg-robot:/dev/changer"
 changerfile "/etc/amanda//changer-state"
 property"tape-device" "0=tape:/dev/nst0"
 property "eject-before-unload" "no"
 property "use-slots" "1-100"
 property "load-poll" "0s poll 3s until 120s"
}

amrecover_changer "/dev/changer"

and /dev/changer is a sym link to /dev/sg11

So let me explain what /dev/sg11 is ... this is the device that
allows manipulating and inspecting the tapes and drives of the
robot; for example it can be used with the mtx command

$ mtx -f /dev/sg11 status


Sure, thanks, I know what /dev/sg11 is ;-)  in my case /dev/sg21

(although that seems to change with reboots sometimes .. another topic)

The difference between your and my conf is only that symlink.

I directly have

tpchanger "chg-robot:/dev/sg21"

and

amrecover_changer "/dev/sg21"

-

Currently I have another amrecover running. It restored from tape1 .. 
and now I only see these lines in the current debug file 
"amidxtaped.20220525123652.debug":


Wed May 25 14:48:25.078884308 2022: pid 705002: thd-0x556f690aca00: 
amidxtaped: 
/usr/lib64/perl5/vendor_perl/5.32/Amanda/Restore.pm:1719:info:490 
12472320 kb
Wed May 25 14:48:40.090891256 2022: pid 705002: thd-0x556f690aca00: 
amidxtaped: 
/usr/lib64/perl5/vendor_perl/5.32/Amanda/Restore.pm:1719:info:490 
12472320 kb
Wed May 25 14:48:55.102880465 2022: pid 705002: thd-0x556f690aca00: 
amidxtaped: 
/usr/lib64/perl5/vendor_perl/5.32/Amanda/Restore.pm:1719:info:490 
12472320 kb


... for hours now.

Is that OK? Maybe tar still "scans" through that first tarball on tape ... ?


Re: amrecover usage with chg-robot

2022-05-25 Thread Stefan G. Weichinger

On 25.05.22 at 12:17, Nuno Dias wrote:


I have "tapedev" in /etc/amanda/amanda-client.conf  but is
commented,  maybe was used a long time ago, but right now is not
necessary with my configuration.

  And yes I do not choose any device or changer when I execute
amrecover.


OK, great.

My amrecover still wants to use /dev/nst0, even after setting 
"amrecover_changer"


I can edit it, no problem, but I try to configure it as well as possible 
... and write a decent howto.


Setting "robot" as device in amrecover doesn't work.

What do you have in "tpchanger" ?

pls share the relevant blocks










amrecover usage with chg-robot

2022-05-25 Thread Stefan G. Weichinger




At a customer I decided to do some recovery testing.

We use the chg-robot changer there, with one tape drive.

See below:

* tapetype
* changer
* taperscan
* storage
* amrecover_changer

(I could also show the whole config, but don't want to flood the list ...)

Amdumps work fine.
Recovery ... I have questions:

I decided to restore a DLE with a lev1 on one tape and the lev0 on 
another tape, to test the whole handling of 2 or more tapes.


That led to amrecover reading the lev1 tarball from tape 1, and then 
hours of waiting ... it never got to the point of requesting tape 2 or 
changing it on its own.


I read the manpages etc, and I am unsure whether to use "robot" (= changer 
name) as tape device in amrecover, or "tape:/dev/nst0" directly.


tested both, no difference

Maybe the tape drive doesn't return some of those EOM/EOT messages?

It's a HP Ultrium 4-SCSI, Rev: B12H ... as far as I remember that worked 
before.


Thanks for any hints here.

(ah, for reference: Amanda-3.5.1 on Debian 11.3 right now)

---

here the configs:


define tapetype LTO-4 {
    comment "Created by amtapetype; compression disabled; 2017-10-31 sgw"
    length 698510208 kbytes
    filemark 0 kbytes
    speed 36696 kps
    blocksize 2 mbytes
    readblocksize 2 mbytes
}

define changer robot {
    tpchanger "chg-robot:/dev/sg21"    # lsscsi -g
    property "tape-device" "0=tape:/dev/nst0"
    property "eject-before-unload" "no"
    property "use-slots" "1-24"
    changerfile "/etc/amanda/abt/chg-robot-dev-sg21"
}

define taperscan lexi {
    comment "none"
    plugin "lexical"
}

define storage abt {
    tapepool "abt"
    tapetype "LTO-4"
    tpchanger "robot"
    runtapes 1
    taperscan "lexi"

    flush-threshold-dumped 300
    flush-threshold-scheduled 300
    taperflush 300
    autoflush yes

    #labelstr "^ABT-[0-9][0-9]*$"
    autolabel "ABT-%b"
}

reserve 30

amrecover_changer "robot"


Re: Request for interested reviewers ...

2022-05-17 Thread Stefan G. Weichinger



On 16.05.22 at 10:01, Stefan G. Weichinger wrote:

On 13.05.22 at 18:12, Chris Hassell wrote:
ANNOUNCEMENT:  I'm going to try to push Betsol's attempt at a 3.6 
"community build" into a 3_6 branch, so it can be tested as a whole.  
We will serve binaries built for many distros as we can get them 
together.   There's enough changes to merit a reasonably new 
sub-version, if not maybe the 4.0 we would like.


My version of amsamba upgrade is in:  3_5-out-amsamba-only


Thanks for sharing.

Tested your new amsamba-script right now, still that "missing size line 
from sendbackup".


In general: is the dump OK then anyway, or not?

Or: can we ignore that message for now or not?

As far as I test recovery the dumps seem OK, but I would prefer a 
definitive statement here ...


I assume the newer amsamba-script should fix that as soon as it works 
with recent samba-versions.


Re: Request for interested reviewers ...

2022-05-16 Thread Stefan G. Weichinger

On 13.05.22 at 18:12, Chris Hassell wrote:

ANNOUNCEMENT:  I'm going to try to push Betsol's attempt at a 3.6 "community 
build" into a 3_6 branch, so it can be tested as a whole.  We will serve binaries 
built for many distros as we can get them together.   There's enough changes to merit a 
reasonably new sub-version, if not maybe the 4.0 we would like.

My version of amsamba upgrade is in:  3_5-out-amsamba-only


Thanks for sharing.

Tested your new amsamba-script right now, still that "missing size line 
from sendbackup".


I only substituted that one script on my debian 11.3 system and edited 
some paths to make it work (@amdatadir@, for example).




Re: Request for interested reviewers ...

2022-05-11 Thread Stefan G. Weichinger

On 02.12.20 at 01:17, Chris Hassell wrote:

Greetings,

It’s been a long spring and summer.   At BETSOL we’re (sigh) very nearly 
done with our new Django-based UI for Zmanda 4.0 based on Amanda 
3.5.1.   Along with that, we have several updates that need review and 
we’d like to get in to the community version we support.  Hopefully 
enough to call it 3.6?


[..]


- Updates to Amsamba.pl

  o Big work simply to update, clean up, and eliminate troublesome warnings.

  o Use of “-A” authorization ‘file’ to communicate to smbclient (as 
    PASSWD_FD caused problems often)

  o Consistent use of open3 for checks / backups / restores


I'd be happy to test new amsamba, as mentioned in another thread and on 
github.


Re: amsamba breaks with samba-4.14?

2022-05-06 Thread Stefan G. Weichinger

On 24.08.21 at 17:38, Chris Hassell wrote:

@Tobias:

We have a new version of the amsamba app that you can try.   We need
to get it out to get reviewed but it's nearly all changed.   We were
hoping to get a community build out ... but that keeps getting
delayed.

It's been refined and corrected in several ways.  We have seen
various versions of samba behave differently.  I  cannot say I saw
this below, though.


@Chris, could you pls share the updated amsamba-script asap?

My tests suggest that the issue with "[missing size line from 
sendbackup]" seems to come from amsamba, not from the encryption part.


Samba-4.15.7 on Debian 11.3 here.






Re: amcrypt: deprecated key derivation used

2022-05-06 Thread Stefan G. Weichinger

On 06.05.22 at 12:34, Exuvo wrote:

Sorry for a lot of replies.


never mind, I am happy to get help and some communication going

Added your suggestion with:

echo "$@" > /tmp/encryptparams

now

I looked at my config and I only use estimate calcsize and estimate 
server, as estimate client was so slow when it was using that.
I probably never tested my encryption script with estimate client, which 
I think is the default.


In my case it's using "estimate server" already.

Ah, your next reply comes in right now ;-)

(this might work better in some forum or chat. Or even in the github issue?)





Re: amanda fails

2022-05-06 Thread Stefan G. Weichinger

On 18.10.21 at 14:38, Stefan G. Weichinger wrote:


Anyone seeing this as well?


FAILURE DUMP SUMMARY:
   taper: FATAL Can't use an undefined value as an ARRAY reference at 
/usr/lib/x86_64-linux-gnu/amanda/perl/Amanda/ScanInventory.pm line 343.


This on a debian-10.10 server


Hitting this again, now with debian 11.3

I assume this is some Perl issue, maybe related to upgrades in between.

Unfortunately I am still no Perl guru, maybe someone could have a look.

To me it seems related to the "taperscan" parameter.




Re: amcrypt: deprecated key derivation used

2022-05-06 Thread Stefan G. Weichinger



On 04.05.22 at 16:46, Exuvo wrote:
Ah yes, my RANDFILE was probably already created long ago when I 
initially set up encryption.


From what I have read the random file is not really needed on most 
systems, as it is only there to help low-entropy systems (i.e. a server 
that does nothing most of the time).
Each time openssl runs it uses that file (if specified) for random seeding, 
and at command end it replaces the file with 256 new bytes of randomness 
for the next invocation.

It is not needed for decryption.

From the man page the digest is only used to create the real encryption 
key from the text key you supply. It should not affect speed at all.
The default digest is sha-256; sha-512 just has more bits. The only 
thing you would gain is more protection against brute-force attacks, I 
think.


Thanks, great.

As mentioned on Github, I still see issues with your crypt-script when 
combined with amsamba: that leads to dumps with "missing size line from 
sendbackup". Would be great to get that fixed as well.


I will try to streamline and cleanup my config and report the actual 
dumptype definition.


Re: amcrypt: deprecated key derivation used

2022-05-04 Thread Stefan G. Weichinger



On 04.05.22 at 12:46, Stefan G. Weichinger wrote:

On 04.05.22 at 11:36, Exuvo wrote:
Yeah, the included ossl usage is using old key derivation. On my 
installation I have replaced amcrypt-ossl usage with:

# cat /etc/amanda/encrypt
#!/bin/bash

AMANDA_HOME=~amanda
PASSPHRASE=$AMANDA_HOME/.am_passphrase    # required
RANDFILE=$AMANDA_HOME/.rnd
export RANDFILE


At first things were failing; the "not found" was misleading me, as I 
assumed the wrapper file was missing (I decided to create 
"/usr/sbin/exuvo_crypt" ;-) ).


Turns out that the RANDFILE was missing, created one by:

backup:~$ dd if=/dev/urandom of=.rnd bs=256 count=1

I assume I should store/backup that one alongside the encryption 
passphrase somewhere? Is it needed for decryption?


First dump looks good now, on to some restore tests.

btw: I also read of "-md sha512" to speed up ... obsolete when using 
"-aes-256-ctr" maybe?


If I change encryption now it would be the time to get it right.

thanks so far!
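
For completeness: such a wrapper hangs off a dumptype via the standard 
amanda.conf(5) encryption parameters. A sketch (the dumptype name is made 
up here):

define dumptype enc-exuvo {
    global
    program "GNUTAR"
    encrypt client
    client-encrypt "/usr/sbin/exuvo_crypt"
    client-decrypt-option "-d"
}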


Re: amcrypt: deprecated key derivation used

2022-05-04 Thread Stefan G. Weichinger

On 04.05.22 at 11:36, Exuvo wrote:
Yeah, the included ossl usage is using old key derivation. On my 
installation I have replaced amcrypt-ossl usage with:

# cat /etc/amanda/encrypt
#!/bin/bash

AMANDA_HOME=~amanda
PASSPHRASE=$AMANDA_HOME/.am_passphrase    # required
RANDFILE=$AMANDA_HOME/.rnd
export RANDFILE

if [ "$1" = -d ]; then
     /usr/bin/openssl enc -pbkdf2 -d -aes-256-ctr -salt -pass fd:3 3< 
"${PASSPHRASE}"

else
     /usr/bin/openssl enc -pbkdf2 -e -aes-256-ctr -salt -pass fd:3 3< 
"${PASSPHRASE}"

fi

pbkdf2 to fix the deprecated key derivation, aes-256-ctr for better and 
faster encryption (ctr can be parallelized). Also padding is not needed 
with this encryption method.
But this obviously can't open old backups, so keep this file separate 
from amcrypt-ossl so you can still use the old one for old backups.


Sounds great, thanks! I currently try to adjust it to the debian 
environment (amanda user "backup", paths etc).



While I am at it, here is my script for better compression using zstd:
# cat /etc/amanda/zstd-compression3
#!/bin/bash
if [[ "$1" == "-d" ]]; then
     zstd -dqcf
else
     zstd -qc -3 -T0
fi


That might be a future improvement. I already have a dumptype doing 
that, according to an earlier thread you started (?).
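
For reference, that dumptype boils down to something like this -- a sketch 
with an invented name, parameters as in amanda.conf(5):

define dumptype comp-zstd {
    global
    program "GNUTAR"
    compress client custom
    client-custom-compress "/etc/amanda/zstd-compression3"
}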


Re: amcrypt: deprecated key derivation used

2022-05-04 Thread Stefan G. Weichinger

On 17.12.21 at 04:35, Stefan G. Weichinger wrote:


That's an old one, but as far as I see, not fixed yet:

I get problems with DLEs using amcrypt-ossl.

The message in amstatus contains "deprecated key derivation used".

This seems to point to something like this issue:

https://unix.stackexchange.com/questions/507131/openssl-1-1-1b-warning-using-iter-or-pbkdf2-would-be-better-while-decrypting 



Ah, and we have an zmanda/amanda issue for more than two years here also:

https://github.com/zmanda/amanda/issues/112


Still hitting this issue.

Patched a server today, but I get FAILED:

"[missing size line from sendbackup]"

Anyone having seen that?


Re: debian 11, vtapes not found correctly anymore

2022-04-21 Thread Stefan G. Weichinger

On 21.04.22 at 09:12, Stefan G. Weichinger wrote:

On 21.04.22 at 09:03, Kees Meijs | Nefos wrote:

Hi,

Anything in /var/log/kern.log and the like about the "amcheck-device: 
new Amanda::Changer::Error: type='failed', reason='notfound', 
message='No removable disk mounted on '/mnt/externaldisk01'' maybe?


Maybe there's a clear error about the drive not being mounted and the 
root cause is outside of Amanda.


It gets mounted.

lines in kern.log like:

Apr 21 08:52:15 server kernel: [5431442.882110] EXT4-fs (sdb1): mounted 
filesystem with ordered data mode. Opts: (null)
Apr 21 08:53:53 server kernel: [5431540.968731] EXT4-fs (sdb1): mounted 
filesystem with ordered data mode. Opts: (null)


Something else:

when I run "amadmin vtape tape" it tells me it looks for a tape on the 
non-attached usb disk. Or a new tape.


This explains why it accepts a relabeled tape (it's new then).

So something with the rotation and/or number of tapes seems wrong, right?

There are 15 slots on the currently attached disk.

"num_slot" = 14

When I reduce tapecycle to 14 tapes, the vtape is recognized OK!

I had tapecycle = 28 tapes before: 2 changers with 14 tapes each.

That has worked so far, I wonder why.

I adjust to 14 now and monitor the next runs.
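
Concretely in amanda.conf -- a sketch of my current understanding, not 
authoritative:

tapecycle 14 tapes

matching the "num_slot" "14" of the single chg-disk changer that is 
attached at any one time.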


Re: debian 11, vtapes not found correctly anymore

2022-04-21 Thread Stefan G. Weichinger

On 21.04.22 at 09:03, Kees Meijs | Nefos wrote:

Hi,

Anything in /var/log/kern.log and the like about the "amcheck-device: 
new Amanda::Changer::Error: type='failed', reason='notfound', 
message='No removable disk mounted on '/mnt/externaldisk01'' maybe?


Maybe there's a clear error about the drive not being mounted and the 
root cause is outside of Amanda.


It gets mounted.

lines in kern.log like:

Apr 21 08:52:15 server kernel: [5431442.882110] EXT4-fs (sdb1): mounted 
filesystem with ordered data mode. Opts: (null)
Apr 21 08:53:53 server kernel: [5431540.968731] EXT4-fs (sdb1): mounted 
filesystem with ordered data mode. Opts: (null)


no errors there

thanks




Re: debian 11, vtapes not found correctly anymore

2022-04-21 Thread Stefan G. Weichinger

On 14.04.22 at 07:28, Stefan G. Weichinger wrote:

On 13.04.22 at 15:29, Jose M Calhariz wrote:


My setup is more simple

I have several RAID6, each one is always mounted, on /vTapes1,
/vTapes2, ... and there is a /vLibrary where there is all slots
directories that are symbolic links into /vTapes*

Have you looked inside /mnt/externaldisk* to check whether everything 
seems OK?


Sure.

Thanks for describing your more basic setup.

Things I am not sure of:

* taperscan: back then JLM told me to use "lexical" because otherwise it 
didn't work (years ago, I think it was related to this "chg-aggregate 
combines multiple chg-disk changers")


* labels: autolabel, metalabel ... maybe I have that wrong

* and why did it break? It worked ok in these 2 sites for years. Both 
Debian, so maybe some perl-library changed or something like that?


So I am basically relabeling a tape or two every day on these 2 amanda 
servers to keep the backups working.



Still not solved. See debug file:

 cat amcheck-device.20220421085207.debug
Thu Apr 21 08:52:07.730038795 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: pid 26173 ruid 34 euid 34 version 3.5.1: start at Thu 
Apr 21 08:52:07 2022
Thu Apr 21 08:52:07.730194892 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: reading config file /etc/amanda/vtape_usb/amanda.conf
Thu Apr 21 08:52:07.747494862 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: pid 26173 ruid 34 euid 34 version 3.5.1: rename at Thu 
Apr 21 08:52:07 2022
Thu Apr 21 08:52:07.756287648 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: chg-disk: Dir /mnt/externaldisk01
Thu Apr 21 08:52:07.756300940 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: chg-disk: Using statefile '/mnt/externaldisk01/state'
Thu Apr 21 08:52:15.465003748 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: new Amanda::Changer::Error: type='failed', 
reason='notfound', message='No removable disk mounted on 
'/mnt/externaldisk01''
Thu Apr 21 08:52:15.465126422 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: Changer 'disk01' not quit
Thu Apr 21 08:52:15.466118198 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: chg-disk: Dir /mnt/externaldisk02
Thu Apr 21 08:52:15.466139627 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: chg-disk: Using statefile '/mnt/externaldisk02/state'
Thu Apr 21 08:52:15.664007096 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: chg-aggragte: Using statefile 
'/etc/amanda/vtape_usb/aggregate.stats'
Thu Apr 21 08:52:16.499061286 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: new Amanda::Changer::Error: type='failed', 
reason='notfound', message='No acceptable volumes found'
Thu Apr 21 08:52:16.654700142 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_utime   : 0
Thu Apr 21 08:52:16.654750749 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_stime   : 0
Thu Apr 21 08:52:16.654764307 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_maxrss  : 39176
Thu Apr 21 08:52:16.654778894 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_ixrss   : 0
Thu Apr 21 08:52:16.654789168 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_idrss   : 0
Thu Apr 21 08:52:16.654798856 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_isrss   : 0
Thu Apr 21 08:52:16.654808877 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_minflt  : 8815
Thu Apr 21 08:52:16.654818861 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_majflt  : 0
Thu Apr 21 08:52:16.654828484 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_nswap   : 0
Thu Apr 21 08:52:16.654838003 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_inblock : 1168
Thu Apr 21 08:52:16.654847930 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_oublock : 32
Thu Apr 21 08:52:16.654857650 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_msgsnd  : 0
Thu Apr 21 08:52:16.654867158 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_msgrcv  : 0
Thu Apr 21 08:52:16.654876613 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_nsignals: 0
Thu Apr 21 08:52:16.654886249 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_nvcsw   : 54
Thu Apr 21 08:52:16.654895860 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: ru_nivcsw  : 50
Thu Apr 21 08:52:16.655278285 2022: pid 26173: thd-0x55f8b31aac00: 
amcheck-device: pid 26173 finish time Thu Apr 21 08:52:16 2022


"amadmin vtape_usb inventory" shows all the vtapes correctly.



Re: debian 11, vtapes not found correctly anymore

2022-04-13 Thread Stefan G. Weichinger

On 13.04.22 at 15:29, Jose M Calhariz wrote:


My setup is more simple

I have several RAID6, each one is always mounted, on /vTapes1,
/vTapes2, ... and there is a /vLibrary where there is all slots
directories that are symbolic links into /vTapes*

Have you looked inside /mnt/externaldisk* to check whether everything 
seems OK?


Sure.

Thanks for describing your more basic setup.

Things I am not sure of:

* taperscan: back then JLM told me to use "lexical" because otherwise it 
didn't work (years ago, I think it was related to this "chg-aggregate 
combines multiple chg-disk changers")


* labels: autolabel, metalabel ... maybe I have that wrong

* and why did it break? It worked ok in these 2 sites for years. Both 
Debian, so maybe some perl-library changed or something like that?


So I am basically relabeling a tape or two every day on these 2 amanda 
servers to keep the backups working.


Re: debian 11, vtapes not found correctly anymore

2022-04-13 Thread Stefan G. Weichinger

On 13.04.22 at 10:37, Stefan G. Weichinger wrote:


... up to changer disk3. all the disks are listed in /etc/fstab


correction: up to changer disk7, sure



Re: debian 11, vtapes not found correctly anymore

2022-04-13 Thread Stefan G. Weichinger

On 13.04.22 at 09:56, Stefan G. Weichinger wrote:


On 12.04.22 at 17:57, Jose M Calhariz wrote:

Hi,

Just to say that I have one installation of Amanda on Debian 11 with 
vtapes working flawlessly.  Can we share the setup?


Sure, I'd appreciate that.

I have several vtape-installations working OK, it seems that the special 
config with aggregating 2 or more chg-disk-changers is the problem here.


I will post such a config in the next hour.


OK, showing.

I won't post the whole amanda.conf as it is long and maybe irrelevant 
for this topic.


Just the parts around the changers, and some scheduling parameters.

See "amadmin vtape config" here:

https://gist.github.com/stefangweichinger/693eeb2c0c02d03abb31b53073352dd1

(yes, way too many old dumptypes in there etc)

-

I take the setup from a site where we have 7 external USB drives with 
vtapes on them.


So 7 changer devices like this:

define changer disk1 {
tpchanger "chg-disk:/mnt/externaldisk1"
property "num_slot" "20"
property "auto-create-slot" "yes"
property "removable" "yes"
property "MOUNT" "yes"
property "UMOUNT" "yes"
property "UMOUNT-LOCKFILE" 
"/etc/amanda/vtape/externaldisk1.lock"
property "UMOUNT-DELAY" "1"
}

define changer disk2 {
tpchanger "chg-disk:/mnt/externaldisk2"
property "num_slot" "20"
property "auto-create-slot" "yes"
property "removable" "yes"
property "MOUNT" "yes"
property "UMOUNT" "yes"
property "UMOUNT-LOCKFILE" 
"/etc/amanda/vtape/externaldisk2.lock"
property "UMOUNT-DELAY" "1"
}

... up to changer disk3. all the disks are listed in /etc/fstab

UUID=5a5a9927-995f-4f0f-98ff-d222561f84ff /mnt/externaldisk1 ext4 
relatime,noauto,user 0 2


and are (un-)mounted by the backup-user at amanda runtime.

-

The 7 chg-disk changers are aggregated into one "tpchanger" via 
chg-aggregate:


define changer aggregate {
tpchanger "chg-aggregate:{disk1,disk2,disk3,disk4,disk5,disk6,disk7}"
property "state_filename" "/etc/amanda/vtape/aggregate.stats"
property "allow-missing-changer" "yes"
}

And that one is used via:

tpchanger   "aggregate"

- The "design goal" here was:

an employee should swap the usb-disk every week.

Amanda should rotate vtapes on one disk if they forget to change the disk.

Amanda should use vtapes on each external disk, regardless of which disk 
is attached.


That works OK.

Right now it's only that issue around "why aren't existing vtapes 
recognized and used?".


--

I also have this:

define taperscan "lexi" {
plugin "lexical"
#plugin "traditional"
}

DEFINE STORAGE vtape {
  TPCHANGER   "aggregate"
  LABELSTR    "vtape-[0-9]*-[0-9]*"
  TAPEPOOL    "vtape"
  RUNTAPES    2
  TAPERSCAN   "lexi"
  TAPETYPE    "vtape"
  TAPERALGO   FIRSTFIT
}

storage "vtape"

Maybe redundant, I don't know.

-

The tapetype used:

define tapetype vtape {
comment "Dump onto hard disk"
length 160 GB
part-size 1 GB
part-cache-type memory
}










Re: debian 11, vtapes not found correctly anymore

2022-04-13 Thread Stefan G. Weichinger



On 12.04.22 at 17:57, Jose M Calhariz wrote:

Hi,

Just to say that I have one installation of Amanda on Debian 11 with 
vtapes working flawlessly.  Can we share the setup?


Sure, I'd appreciate that.

I have several vtape-installations working OK, it seems that the special 
config with aggregating 2 or more chg-disk-changers is the problem here.


I will post such a config in the next hour.


Re: debian 11, vtapes not found correctly anymore

2022-04-12 Thread Stefan G. Weichinger

On 05.04.22 at 14:39, Stefan G. Weichinger wrote:


At two amanda-installations on Debian11 I see this lately:

both setups use an aggregate setup of multiple external usb disks with 
vtapes on them.


That worked well for years.

Now I get errors because amanda does not find valid tapes on one or more 
external disks.


"amtape config inventory" shows the vtapes, but "amcheck -t config" 
tells me "No acceptable volumes found".


If I relabel a tape "amlabel -f vtape_usb vtape_usb-002-004  slot 1:4" 
it gets detected again:


$ amcheck -t vtape_usb
Amanda Tape Server Host Check
-
mount: /mnt/externaldisk01: can't find 
UUID=98cb03d6-e95e-462d-a823-6011b37c9f42.

slot 1:4: volume 'vtape_usb-002-004'
Will write to volume 'vtape_usb-002-004' in slot 1:4.
NOTE: skipping tape-writable test
Server check took 6.673 seconds
(brought to you by Amanda 3.5.1)


(externaldisk01 is absent: OK, externaldisk02 is connected)

I assume I should grep through some debug logs. What and where to look for?


Still seeing these issues. I have to relabel every day; that is not the 
way a backup setup should work.


See this tapelist, I wonder about that META column.

/etc/amanda/vtape_usb/tapelist
20220412084458 vtape_usb-002-006 reuse META:vtape_usb-003 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220411123301 vtape_usb-002-017 reuse META:vtape_usb-003 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220408224501 vtape_usb-002-016 reuse META:vtape_usb-003 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220408224501 vtape_usb-002-015 reuse META:vtape_usb-003 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220407224501 vtape_usb-002-014 reuse META:vtape_usb-003 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220407073600 vtape_usb-002-013 reuse META:vtape_usb-003 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220405224501 vtape_usb-002-005 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220405224501 vtape_usb-002-004 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220405132318 vtape_usb-002-003 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220405120227 vtape_usb-002-001 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220321224501 vtape_usb-002-012 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220320224501 vtape_usb-002-011 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220320224501 vtape_usb-002-010 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220319224501 vtape_usb-002-009 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220319224501 vtape_usb-002-008 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb
20220318224501 vtape_usb-002-007 reuse META:vtape_usb-002 BLOCKSIZE:32 
POOL:vtape_usb STORAGE:vtape_usb CONFIG:vtape_usb



What about that meta label?

Is it a problem that it is duplicated?






debian 11, vtapes not found correctly anymore

2022-04-05 Thread Stefan G. Weichinger



At two amanda-installations on Debian11 I see this lately:

both setups use an aggregate setup of multiple external usb disks with 
vtapes on them.


That worked well for years.

Now I get errors because amanda does not find valid tapes on one or more 
external disks.


"amtape config inventory" shows the vtapes, but "amcheck -t config" 
tells me "No acceptable volumes found".


If I relabel a tape "amlabel -f vtape_usb vtape_usb-002-004  slot 1:4" 
it gets detected again:


$ amcheck -t vtape_usb
Amanda Tape Server Host Check
-
mount: /mnt/externaldisk01: can't find 
UUID=98cb03d6-e95e-462d-a823-6011b37c9f42.

slot 1:4: volume 'vtape_usb-002-004'
Will write to volume 'vtape_usb-002-004' in slot 1:4.
NOTE: skipping tape-writable test
Server check took 6.673 seconds
(brought to you by Amanda 3.5.1)


(externaldisk01 is absent: OK, externaldisk02 is connected)

I assume I should grep through some debug logs. What and where to look for?
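
(A starting point -- the path is a guess at the usual Debian debug 
location, adjust to wherever your Amanda debug directory lives:

grep -l "No acceptable volumes" /var/log/amanda/server/vtape_usb/amcheck-device.*.debug

to find the failed runs, then read the matching files for the 
Amanda::Changer::Error lines.)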

regards, Stefan


Re: using Google Cloud for virtual tapes

2022-04-05 Thread Stefan G. Weichinger

On 04.04.22 at 23:33, Chris Hassell wrote:

Google cloud works and works well enough indeed.   All of them work... but a 
non-block technique is capped at 5TB by almost all providers, and getting 
there is complicated and can DOUBLE your storage cost over one month [only] if 
you cannot make temporaries without being charged.  (looking at you, Wasabi!!).

However either A or B or C for Google specifically...
A) the desired block size must be kept automatically small (it varies but ~40MB or 
smaller buffer or so for a 4GB system) ... and each DLE "tape" must be limited 
in size
B) the biggest block size can be used to store 5TB objects [max == 512MB] but 
the curl buffers will take ~2.5GB and must be hardcoded [currently] in the 
build.  It's too much for many systems.
C) the biggest block size can be used but google cannot FAIL EVEN ONCE ... or 
the cloud upload cannot be restarted and the DLE fails basically.   This 
doesn't succeed often on the way to 5TB.
D) the biggest block size can be used but Multi-Part must be turned off and a 
second DLE and later gets to be very very very slow to add on

Option D has NO limits to backups, but it is what needs the O(log N) check for 
single-stored blocks.

This currently does an O(N) check against earlier blocks to check the cloud 
storage total after file #1 every time.   Very slow at only 1000 per transaction.


@Chris, thanks for these infos.

Sounds a bit complicated, I have to see how I can start there.

I won't have very large DLEs anyway, that might help.

I have ~700GB per tape right now, that's not very much. Although 
bandwidth (=backup time window) also will be an issue here.




Re: using Google Cloud for virtual tapes

2022-04-04 Thread Stefan G. Weichinger

On 29.03.22 at 15:34, Chris Hassell wrote:

Google Cloud is somewhat difficult because they don't fully support the Amazon S3 
operations.   One cannot upload blocks and "CopyPart" them into a larger 
object.   Wasabi and S3 and others can do that.

There needs to be a simple overhaul of the "millions of blocks" upload 
technique (non-multipart backups) so that it can be done without O(n) checks for every 
DLE.


So is it usable? Or do I have to do some combo of local vtapes and 
amvaulting them into the cloud, maybe?




using Google Cloud for virtual tapes

2022-03-25 Thread Stefan G. Weichinger



At a customer I have to somehow move the backups into Google Cloud (the 
company moves everything there).


Does anyone already combine that with amanda somehow?

They forwarded me this:

https://www.cloudbooklet.com/gsutil-cp-copy-and-move-files-on-google-cloud/

I could think of using amvault to *copy* vtape content there.

And I find "Cloud Storage FUSE" which allows to mount such a storage bucket.

Does anyone here have experience with this and Amanda backups?

thanks, regards, Stefan


Re: Sent out a pull-request to upgrade the packaging system....

2022-03-18 Thread Stefan G. Weichinger

On 17.03.22 at 22:28, Chris Hassell wrote:
First of many.  It is not configured-and-tested as is but it is very 
close to our working version.


- Versioning based on tags from git

- An in-directory build as well as below-directory package build

- Solaris 10 and Solaris 11 packages

- Better handling of pre/post scripts (may need to update Debian ones??)

- Systemd services are included as well.

If I can get someone to look over it and give it a spin I’d really 
appreciate it.


I have more fixes for the installcheck directory to allow some good and 
solid self-tests.


Nice to see progress.

I assume you want us to try to build the code in that PR?

pls point to some howto, I can't remember the procedure anymore 
(autoconf? etc)


I can test on Debian 11, for example.


Re: example config for aes-256-ctr and zstd

2022-02-22 Thread Stefan G. Weichinger

On 21.02.22 at 18:55, Exuvo wrote:
In case anyone is interested, I made a post about LTO-5 tapes that 
includes example configs for Amanda using aes-256-ctr with a password 
and zstd compression:
https://www.eevblog.com/forum/general-computing/lto-tape-usage-(modern-tape-drives)/


thanks for sharing.

May I maybe add it to the (small) collection here:

https://github.com/stefangweichinger/amanda-helpers

?

For sure you could even file a PR there, if you prefer.



amcrypt: deprecated key derivation used

2021-12-16 Thread Stefan G. Weichinger



That's an old one, but as far as I see, not fixed yet:

I get problems with DLEs using amcrypt-ossl.

The message in amstatus contains "deprecated key derivation used".

This seems to point to something like this issue:

https://unix.stackexchange.com/questions/507131/openssl-1-1-1b-warning-using-iter-or-pbkdf2-would-be-better-while-decrypting

Ah, and we have an zmanda/amanda issue for more than two years here also:

https://github.com/zmanda/amanda/issues/112

I suggested this patch:

https://github.com/stefangweichinger/amanda/commit/9c79d55c906ecf822f5d31b2cea69b679fb93572

but what about restore?

dumps encrypted with the old script need to be decrypted with different 
options than newer ones.


Does anyone have an idea how to write a corresponding script for that?
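
My current guess at such a script, strictly a sketch: it assumes the dumps 
were made by the stock amcrypt-ossl (aes-256-cbc) and uses OpenSSL's legacy 
key derivation; the -md value depends on the OpenSSL version that wrote 
the dumps (md5 before 1.1.0, sha256 from 1.1.0 on):

#!/bin/bash
# decrypt-only wrapper for dumps made with the OLD amcrypt-ossl
# (assumed cipher aes-256-cbc; no -pbkdf2, i.e. legacy EVP_BytesToKey)
PASSPHRASE=~amanda/.am_passphrase
exec /usr/bin/openssl enc -d -aes-256-cbc -salt -md md5 -pass fd:3 3< "${PASSPHRASE}"

Kept separate from the new pbkdf2 wrapper, as suggested earlier in this 
thread.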


Re: amanda vs bullseye 0/1

2021-12-06 Thread Stefan G. Weichinger

On 04.12.21 at 20:32, Charles Curley wrote:


You seem to have installed both the upstream amanda package and the
Debian package, and managed to conflate the two. That won't work.

Back out the upstream package entirely, and continue with the Debian
package. Remove the Debian packages. Kill off /home/amanda. Use find to
seek and destroy any file or directory with either owner or group
amanda. Get rid of any executables from the package. Get rid of the
entries in passwd and group. Kill off any executables. Then re-install
the Debian amanda packages.



Gene: I run amanda on debian bullseye for several customers without issues.

Yes, the user is "backup" and not "amanda" with the debian repo packages.


Re: tape problem

2021-12-06 Thread Stefan G. Weichinger

On 06.12.21 at 15:39, David Simpson wrote:

Further to Jose's questions.

Some config excerpt may help.

On blocksize, that is defined within the tapetype.

 blocksize 512 kbytes



And maybe try a run with amtapetype.






Re: Solaris 11.4 64 binaries

2021-11-10 Thread Stefan G. Weichinger

On 10.11.21 at 05:36, Chris Hassell wrote:
I have.  We have a community release due… way … way way overdue.   I’m 
afraid it’s been only myself doing it.    (A lot of change has been 
disruptive, I’m afraid).


Well, why not share development with the users within let's say github 
or so, and get feedback etc along the way?


It's hard to understand why Betsol completely ignores (at least in 
terms of patches, solving issues ...) the community edition for so long.


I think many of the ml-users here think so as well.




Re: Ordering the dumps?

2021-10-28 Thread Stefan G. Weichinger



On 28.10.21 at 10:42, Diego Zuccato wrote:

On 28/10/2021 10:02, Stefan G. Weichinger wrote:

btw: it's recommended NOT to use "localhost" in disklist. There's even 
an FAQ for that.

Sorry, I can't find it. Could you please post a link?
Since I have some 'localhost' entries I'd probably have to change 'em.


I expected this question ;-)

Had to google myself, it seems not to be on wiki.zmanda.com

Found it here; wrote that years ago, I don't know if everything is still 
true.


https://docs.huihoo.com/amanda/2.5.x/topten.html#id2578555

-

full docs also there: https://docs.huihoo.com/amanda/2.5.x/index.html

I remember the amount of work converting and formatting, oh my ...


Re: Ordering the dumps?

2021-10-28 Thread Stefan G. Weichinger

On 27.10.21 at 09:28, Olivier wrote:

Hello,

My backups have not been running for some time due to a combination of
work from home and general power failure.

Now that I have restarted it, it fails for all the DLE with an error
like below:

localhost / lev 0  FAILED [dumps too big, 171153 KB, but cannot
 incremental dump skip-incr disk]
localhost /var lev 1  FAILED [dumps way too big, 124813 KB, must skip
 incremental dumps]

I have 1.6 TB in holding disk and no DLE is bigger than 1.1 TB, so any
DLE should fit inside the holding disk and inside the tapes of one run.

I know there is a way for Amanda to prioritize the DLE, but I am not
sure what configuration I should use.


I would start by only dumping selected DLEs or even a single one:

amdump config somehost /directory

This should get your runs into balance step by step (run the biggest 
ones separately at first; later runs might work with the full set of DLEs).


btw: it's recommended NOT to use "localhost" in disklist. There's even 
an FAQ for that.


Re: amanda fails

2021-10-23 Thread Stefan G. Weichinger

On 19.10.21 at 16:45, Charles Curley wrote:

On Tue, 19 Oct 2021 10:19:28 +0200
"Stefan G. Weichinger"  wrote:


Switching the parameter "taperscan" from "lexical" to "traditional"
works around that now.


I don't have taperscan in either of my configurations, so I guess I'm
running on the default. The lack probably comes from more than a decade
of copying in old configurations every time I install.


That setup worked for a few days now with "traditional".

Checked right now, holding disk full, dumps not taped again.

amcheck did not find the next vtape; I assume the external USB drive (= 
another chg-disk changer in my aggregate setup) was plugged in 
yesterday.


Edited back to "lexical", amflush works now.

A bit strange and not reliable, as it seems.


Re: amsamba breaks with samba-4.14?

2021-10-21 Thread Stefan G. Weichinger

On 24.08.21 at 17:38, Chris Hassell wrote:

@Tobias:

We have a new version of the amsamba app that you can try.   We need to get it 
out to get reviewed but it's nearly all changed.   We were hoping to get a 
community build out ... but that keeps getting delayed.

It's been refined and corrected in several ways.  We have seen various versions 
of samba behave differently.  I  cannot say I saw this below, though.


Chris, where is that amsamba app to test?




Re: amanda fails

2021-10-19 Thread Stefan G. Weichinger

On 18.10.21 at 18:00, Charles Curley wrote:

On Mon, 18 Oct 2021 14:38:46 +0200
"Stefan G. Weichinger"  wrote:


Anyone seeing this as well?


FAILURE DUMP SUMMARY:
taper: FATAL Can't use an undefined value as an ARRAY reference at
/usr/lib/x86_64-linux-gnu/amanda/perl/Amanda/ScanInventory.pm line
343.

This on a debian-10.10 server


Nope. Never seen it. I am using VTAPEs.


Updated to 10.11, no change.

Amanda does not dump or flush to tapes because of that error.

I am not aware of any changes to the amanda config lately.

Switching the parameter "taperscan" from "lexical" to "traditional" 
works around that now.


Although I remember that Jean-Louis told me back then to use "lexical" 
to make things work with my setup of aggregating multiple chg-disk changers.

